Sunday, July 29, 2018

Innovation by Accident


Not every product we use today was planned out perfectly from the start. Products are often discovered accidentally, or the way they are used commercially has nothing to do with the original intent of the product's creator. Famous examples include the Slinky toy, X-rays, and the microwave oven. Each of these inventions was created during research and engineering efforts aimed at other problems: the Slinky came into existence when an engineer knocked an industrial spring off a shelf, X-ray imaging was discovered during experiments with cathode-ray tubes, and the microwave oven grew out of radar research. There are other instances where a byproduct or quick fix ends up becoming a prominent feature of a system. Two technologies that fit this description are Network Address Translation (NAT) and Short Message Service (SMS) text messaging.
   
Network Address Translation

To understand the accidental impact of NAT, we must first briefly review the history of the Internet. The Internet we use today was first developed as a communications network for the United States military. The Advanced Research Projects Agency (ARPA) built ARPAnet in 1969 as a way to connect military mainframe computers together. The original addressing scheme for ARPAnet was 8 bits, allowing for 256 different host addresses. ARPAnet started with 4 hosts and quickly grew to 213 hosts by 1981 (Bort, 1995). Realizing the limitations of this addressing scheme, Robert Kahn and Vinton Cerf started working on a new one. The fourth iteration of their work produced IPv4 addressing (IP stands for Internet Protocol, and v4 marks the fourth version they created). An IPv4 address is a series of four 8-bit octets, 32 bits in all, which can support 4,294,967,296 (2^32) unique addresses (Bort, 1995). This was a tremendous upgrade from the original 256 possible hosts, but due to the rapid expansion of the Internet in the 1990s, the mismanagement of address allocation, and other technical issues with routing and traffic management, the Internet was facing a real risk of running out of address space. The IPv4 address space was very large, and there needed to be a way to route traffic to the appropriate networks around the Internet. The original solution was to build classes into the address space: the size of a network is identified by the value of the first octet of its address. The address space was broken up into four classes (a short code sketch of the classification rule follows the list):

·         Class A –
o   First octet value of 0 – 127
o   126 networks with 16,777,214 hosts each
·         Class B –
o   First octet value of 128 – 191
o   16,384 networks with 65,534 hosts each
·         Class C –
o   First octet value of 192 – 223
o   2,097,152 networks with 254 hosts each
·         Class D & E –
o   First octet value of 224 – 255
o   Used for multicasting and R&D
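
Since the class is determined entirely by the first octet, the rule fits in a few lines of code. Here is a minimal Python sketch, assuming dotted-quad input; the sample addresses are arbitrary:

```python
def ipv4_class(address):
    """Classify an IPv4 address by the historical classful ranges above."""
    first_octet = int(address.split(".")[0])
    if first_octet <= 127:
        return "A"
    elif first_octet <= 191:
        return "B"
    elif first_octet <= 223:
        return "C"
    return "D/E"

print(ipv4_class("17.0.0.1"))      # A: one of the ~17-million-host networks
print(ipv4_class("150.50.1.1"))    # B
print(ipv4_class("198.51.100.7"))  # C: only 254 usable hosts
```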

This wasn't a perfect solution because the number of addresses per class did not scale well. For example, there are only 126 Class A networks that could be given out, and each one had almost 17 million usable hosts. There are over 2 million Class C networks that can be given out, but each one has only 254 available hosts. During the 1990s, many large companies were given Class A networks and didn't use anywhere near the number of hosts they had available, so those addresses were essentially lost. Meanwhile, smaller companies grew and required more and more Class C networks as their demand for hosts increased. The Class C networks weren't given out in sequence, so companies ended up with network addresses that weren't numerically adjacent and couldn't be summarized, which swelled routing tables and increased the difficulty of routing Internet traffic. The problem was quickly becoming unmanageable.
IPv6 was developed to expand the total address space of the Internet. It can support 340,282,366,920,938,463,463,374,607,431,768,211,456 (2^128) total addresses (Loshin, 2001)! The problem was that IPv6 wasn't drafted until 1998, and it was taking far too long to become a standard. A temporary solution needed to be found. NAT was developed as a stopgap for the shortage of address space, but it ended up solving so many other issues that it essentially delayed the IPv6 rollout for almost 14 years! NAT is a function that runs on a router bordering an internal network and the Internet (Trowbridge, 1997). NAT simply translates IP addresses used on the internal network into IP addresses that are valid on the Internet. The feature that makes NAT so useful is that this translation can be one-to-many: an organization can host many systems internally while using only one address to access the Internet. NAT rewrites the address and port fields in packet headers and keeps a translation table that maps each internal host's traffic back to the right machine, so one external IP address can provide Internet connectivity to many hosts. NAT inadvertently solved many of the problems with IPv4. Since IPv4 addresses could be reused internally, corporations only needed a few valid IP addresses to provide Internet connectivity to all of their hosts. NAT averted the risk of running out of addresses so successfully that IPv6 could be delayed for years with almost no repercussions. This gave IPv6 development time to continue and produced a very robust addressing solution that should provide sustainable address space for years to come. There were several other factors dealing with IP addressing that contributed to and augmented NAT, such as Classless Inter-Domain Routing (CIDR), that can be explored for a more detailed picture of how NAT helped change the way the Internet worked.
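
To make the one-to-many translation concrete, here is a toy Python sketch of a port-based NAT table (the variant usually called NAPT or PAT). The addresses and port range are invented for illustration, and a real router would also handle timeouts, protocol types, and checksum rewrites:

```python
EXTERNAL_IP = "203.0.113.5"  # the single public address the router owns

class NatTable:
    """Map many internal (ip, port) flows onto one external address."""
    def __init__(self):
        self.next_port = 40000  # arbitrary start of the translation range
        self.out = {}           # (internal_ip, internal_port) -> external_port
        self.back = {}          # external_port -> (internal_ip, internal_port)

    def outbound(self, internal_ip, internal_port):
        """Rewrite an outgoing packet's source address and port."""
        key = (internal_ip, internal_port)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return EXTERNAL_IP, self.out[key]

    def inbound(self, external_port):
        """Send a reply back to the internal host that opened the flow."""
        return self.back[external_port]

nat = NatTable()
print(nat.outbound("192.168.1.10", 51515))  # ('203.0.113.5', 40000)
print(nat.outbound("192.168.1.11", 51515))  # same public IP, new port 40001
print(nat.inbound(40000))                   # ('192.168.1.10', 51515)
```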

Text Messages

Text messaging has quickly become the standard way of communicating with mobile phones over the last 15 years, and it wasn't a feature that was ever planned for commercial use. Telephone lines have historically been analog systems, meaning they use waveforms to transmit voice and data instead of digital bits. Early telephone systems handled all aspects of a call using signals sent through the same wires that carried the voice waveforms. For example, phones ring because a telephone switch sends a higher-than-normal amount of electricity down the line, which originally activated an electromagnetic bell inside the phone. As telephones and telephony systems became more complex, so did the signals passed on the wire. The signaling data eventually had to move off the voice transmission lines and onto separate management lines, which is called out-of-band signaling. Large phone switches would communicate things like timing, line availability, and other management information over out-of-band signaling. In 1984, Friedhelm Hillebrand and Bernard Ghillebaert realized that this signaling capacity wasn't always being used (Kuerbis, van Stolk-Cooke, & Muench, 2017). They developed a way to send ASCII characters along the signaling lines when they were idle, which let them send text messages from phone switches to end users. The signaling formats that carried the messages could only support about 140 octets, or 160 seven-bit characters, at a time, and both Hillebrand and Ghillebaert believed that end users would only be able to acknowledge the message. The Global System for Mobile Communications (GSM) committee met in 1985 and started developing the concepts behind Short Message Service (SMS), the standard used to send text messages. Since all phone traffic requires signaling data, providers could give text message access to their customers while incurring almost no cost to develop or implement the service. They were simply using a resource they already had in a different way.
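
That 160-character limit falls straight out of the arithmetic: 160 characters × 7 bits = 1,120 bits = exactly 140 octets. The short Python sketch below demonstrates the 7-bit packing idea; it assumes plain ASCII-compatible input rather than the full GSM 03.38 alphabet:

```python
def pack_7bit(text):
    """Pack 7-bit characters end to end so 8 characters fill 7 octets."""
    bits = ""
    for ch in text:
        # 7 bits per character, least-significant bit first
        bits += format(ord(ch) & 0x7F, "07b")[::-1]
    bits += "0" * (-len(bits) % 8)  # pad to a whole number of octets
    return bytes(int(bits[i:i + 8][::-1], 2) for i in range(0, len(bits), 8))

message = "x" * 160
packed = pack_7bit(message)
print(len(message), "characters ->", len(packed), "octets")  # 160 -> 140
```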

SMS allowed for broadcasts to phones, as Hillebrand and Ghillebaert first envisioned, but also point-to-point messaging between phones. SMS messaging was first commercialized by Nokia in 1994 and gained popularity with the advent of smartphones like the iPhone. In 1999, text messaging between networks became possible, and SMS usage dramatically increased. The average mobile phone user sent about 35 text messages a month in 2000; by 2010, 200,000 text messages were being sent every minute, and over 6.1 trillion texts were sent that year (Steeh, Buskirk, & Callegaro, 2007)! Text messaging packages for cellphones started as an expensive perk and are now a necessity for any phone plan. Text messaging remains one of the largest examples of companies charging customers premium prices for a service that cost them almost nothing to implement. It has also become the de facto way to communicate today, and it was never intended for that use!

References:
Bort, J. (1995). The address mess. InfoWorld, 17(46), 75.

Kuerbis, A., van Stolk-Cooke, K., & Muench, F. (2017). An exploratory study of mobile messaging preferences by age: Middle-aged and older adults compared to younger adults. Journal of Rehabilitation and Assistive Technologies Engineering, 4, 2055668317733257. doi:10.1177/2055668317733257

Loshin, P. (2001). Network address translation. Computerworld, 35(8), 60.

Steeh, C., Buskirk, T. D., & Callegaro, M. (2007). Using Text Messages in U.S. Mobile Phone Surveys. Field Methods, 19(1), 59-75. doi:10.1177/1525822x06292852

Trowbridge, D. (1997). A natty solution to a knotty problem. Computer Technology Review, 17(2), 1, 6+.


Wednesday, July 18, 2018

Decision Making Techniques


Futuring and Innovation
CS875-1803C-01
Benjamin Arnold
Professor:  Rhonda Johnson

Most projects involve group decision making at some level. Most of the time this is handled informally, through meetings or email correspondence. While an informal approach may work for some groups, other groups may benefit from a more structured approach to group decision making. The Delphi technique is a group decision-making method named after the Oracle of Delphi, the mythical Greek seer. It was developed in 1959 by Olaf Helmer, Norman Dalkey, and Nicholas Rescher for the RAND Corporation. The Delphi technique uses anonymous input and a structured flow of information between participants to protect against problems like personal bias, or a bandwagon effect in which one idea supplants other valid information (Helmer-Hirschberg, 1967). Experts in a given field are provided questionnaires designed to capture their information and opinions about an issue. The opinions are converted into an approach to address the issue, and that approach is refined with continuous, anonymous feedback as the process iterates.
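
As a rough illustration of how the rounds converge, here is a toy Python simulation of the Delphi feedback cycle. The estimates, the "pull" factor, and the number of rounds are invented purely for illustration; real Delphi studies also collect qualitative rationale alongside the numbers:

```python
from statistics import median

# round-one answers from five anonymous experts (hypothetical values)
estimates = [10.0, 40.0, 25.0, 60.0, 15.0]

for round_number in range(1, 4):
    consensus = median(estimates)
    # each expert revises partway toward the anonymous group median
    estimates = [e + 0.5 * (consensus - e) for e in estimates]
    print(f"Round {round_number}: median={consensus}, estimates={estimates}")
```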

Another group decision-making technique is Forced Ranking, also known as the Kepner-Tregoe Decision Matrix. Forced Ranking uses a decision matrix to force a ranking among possible alternative solutions. In this technique, several possible solutions for an issue are identified. Those solutions are then scored using weighted values determined with the decision matrix. The matrix lists the criteria that capture the important factors each proposed solution must address. The factors are ranked in importance by being given a weight, and each solution is rated on how well it addresses each criterion. The weighted rating is the weight value times the rating value for each criterion (Welch, 2002). All weighted ratings are tallied, and the solution with the highest cumulative weighted score is determined to be the best solution for the issue (a small worked example follows below). This technique also attempts to eliminate bias and possible bandwagon effects by separating the participants from the solution. The weighted rankings add objective metrics to possibly subjective criteria (Bens, 2005). Also, having the participants break down and grade different aspects of a solution, instead of the solution as a whole, means the solutions are examined objectively and with more rigor by each participant.
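
The arithmetic is simple enough to sketch in a few lines of Python. The criteria, weights, and ratings below are made-up values, purely to show the weight × rating computation:

```python
criteria_weights = {"cost": 5, "speed": 3, "risk": 4}

# each candidate solution rated 1-10 against each criterion
solutions = {
    "Solution A": {"cost": 7, "speed": 9, "risk": 4},
    "Solution B": {"cost": 9, "speed": 5, "risk": 8},
}

for name, ratings in solutions.items():
    score = sum(w * ratings[c] for c, w in criteria_weights.items())
    print(f"{name}: {score}")

# Solution A: 5*7 + 3*9 + 4*4 = 78
# Solution B: 5*9 + 3*5 + 4*8 = 92, so Solution B wins the forced ranking
```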

A third approach is the OODA loop. OODA stands for observe, orient, decide, and act. It is a decision cycle developed by Colonel John Boyd for the United States Air Force as a way to think about conflict situations. Boyd believed that since reality is always changing, any model of that reality must constantly change as well. This model is like the other two examples in that it attempts to bring order to unknown variables. The first part of the OODA loop is to observe those changes; the concept is to be in a constant state of readiness to adapt. The second stage is orientation, which is perhaps the most critical step in the process. Orientation means bringing the observations to bear and processing that information efficiently to prepare to decide (Enck, 2012). Boyd suggested that having a robust background across several disciplines is an advantage at this stage. In a group setting, this is where the group calls on the individual expertise of its members to successfully process the information that was observed. The last two stages are relatively straightforward: decide on a course of action, then act on it. The key takeaways from these stages are to commit to the decision, complete the action, and ensure that any feedback is retained when restarting the loop. A unique factor in implementing an OODA loop is that it is ineffective if not used continuously. A team cannot start up an OODA loop for a single situation and then stop after the first action. The process works best when it is a constant state the team is in and is always being practiced: after every action, observation continues.
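
Because the loop is meant to run continuously, it maps naturally onto a simple event loop. The sketch below is only a skeleton; the stage functions are placeholders standing in for real observation, analysis, and action logic:

```python
import time

def observe():
    return {"timestamp": time.time()}  # gather raw data about the situation

def orient(observation, knowledge):
    knowledge.append(observation)      # fold new data into what we know
    return knowledge

def decide(knowledge):
    return "hold" if len(knowledge) < 3 else "act"

def act(decision):
    print(f"executing: {decision}")

knowledge = []
for _ in range(5):  # in practice the loop never truly ends
    act(decide(orient(observe(), knowledge)))
```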

References

Helmer-Hirschberg, O. (1967). Analysis of the future: The Delphi method (P-3558). Santa Monica, CA: RAND Corporation. Retrieved July 18, 2018, from https://www.rand.org/pubs/papers/P3558.html

Welch, D. A. (2002). Decisions, decisions: The art of effective decision making. Prometheus Books. ISBN 1-57392-934-4.

Bens, I. (2005). Facilitating with ease: Core skills for facilitators, team leaders and members, managers, consultants and trainers. Jossey-Bass. ISBN 0-7879-7729-2.

Enck, R. E. (2012). The OODA Loop. Home Health Care Management & Practice, 24(3), 123-124.
doi:10.1177/1084822312439314


Code Literacy - Horizon Report for K-12 Education



The NMC/CoSN Horizon Report: 2017 K-12 Edition addresses the subject of Coding as a Literacy as a key short-term trend in K-12 education. Short-term trends are described as developments expected to drive technology adoption within a one- to two-year time frame. The report states that computer science is currently one of the fastest growing industries and that coding literacy is quickly becoming a critical skill across many different career fields, including many non-technical ones. It also notes that many education programs around the world are including basic coding in their curriculum (Freeman et al., 2017). I found this interesting because I regularly volunteer for several programs in and around Texas that provide coding camps for children. One program, called Youth Code Jam, focuses on teaching computer science concepts to children with learning differences such as autism, but is also open to anyone who wishes to attend. As a parent of two school-aged children and as a regular volunteer, I do not believe that our public education system is focusing enough effort on teaching code to children. I believe this is due to several forces that work against adopting coding literacy as a priority in school curricula: teachers need to be educated in coding themselves, the cost of resources to teach coding needs to be addressed, and the availability of resources to teach coding is also a challenge.

I believe that there are current technologies that can successfully address each of these issues. To start with cost, we often use Raspberry Pi mini-computers as platforms at many of our Youth Code Jam events. The Raspberry Pi is a small single-board computer with video and audio output, USB and network interfaces, and Wi-Fi capability. These computers are very inexpensive and make great platforms for teaching coding literacy, with many different projects and applications that students can attempt and learn from. In my daughter's public school, I am trying to start a program where each student is given a Raspberry Pi as a personal coding platform. The Pi is small enough to carry in a backpack and take from class to class; each classroom then just needs monitors, keyboards, and Wi-Fi access for the students to use the Pi to reach the internet and learn to code. This leads to the second issue, availability. There are several amazing online resources that teach coding at a beginner level. The Horizon Report links to a Common Sense Education list of the 29 highest-rated online resources for learning code. Sites like Code.org and Codecademy provide lessons in many popular programming languages, where students can work through material at their own pace and be graded on their progress (Code, 2017). These online resources address the last issue, teacher education. Allowing teachers to use these resources as part of a formal curriculum would ease the burden of becoming proficient coders before they can properly educate their students (Learn, 2017). With a resource like Codecademy, teachers can learn alongside their students and rely on lessons that have already been built for the site, instead of writing lesson plans for a subject they are still learning themselves.
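
As a taste of what a first lesson on the Pi can look like, here is the classic "blink an LED" exercise in Python. It assumes the gpiozero library that ships with Raspbian and an LED (with resistor) wired to GPIO pin 17; the pin number is just an example:

```python
from gpiozero import LED
from time import sleep

led = LED(17)  # LED connected to GPIO pin 17

while True:    # blink forever; stop the program with Ctrl+C
    led.on()
    sleep(1)
    led.off()
    sleep(1)
```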

[Image: Raspberry Pi 3 Model B+]

 
References:

Freeman, A., Adams Becker, S., Cummins, M., Davis, A., and Hall Giesinger, C. (2017). NMC/CoSN
Horizon Report: 2017 K–12 Edition. Austin, Texas: The New Media Consortium.

Code.org: Anybody can Learn. (2017). Retrieved July 18, 2018, from http://code.org


Learn to code - for free. (2017). Retrieved July 18, 2018, from http://codeacademy.com



Sunday, July 15, 2018

First Post


Hi, this is the first post for my blog for the class Futuring and Innovation (CS875-1803C-01) at Colorado Technical University.  I'm working towards getting a Doctor of Computer Science degree and I'm currently in the third quarter of my first year. I'm excited for the challenges that this course and this degree have to offer!

Throughout my career, I have tried to be in a state of constant improvement and learning. I believe that it is an essential part of being a computer scientist to always have the attitude of a student. With technology changing daily, it is imperative that computer scientists stay up to date with current innovations, so they can apply them in their work.

I started my career as a Russian linguist for the US Air Force. Once I realized that I truly did not want to do that job for a living, I transferred to become a communications specialist, which led to a position as a Systems Administrator at the headquarters of the Air Intelligence Agency. That job brought me into a rather small and select office that worked with offensive and defensive cyber weapons. In 2005, I completed my enlistment and was hired as a contractor with Northrop Grumman to work in the same office. I worked my way up the ranks and was recruited by MITRE to be the lead systems engineer for offensive cyber development operations. MITRE provides full education benefits for its employees. I have already earned my bachelor's in computer science and my master's in information assurance and network defense, and I signed up for my doctorate a week after I was hired by MITRE!

I'm very excited to complete this degree. A doctorate in computer science will allow me to move up within my company and with my government sponsors. I was really excited to start Futuring and Innovation, and I believe it will be one of my favorite classes in my degree. I would like to focus this blog on interesting information I find that furthers my understanding of my research topic. I'm currently working with various DoD organizations to implement agile development methodologies in cyber weapon development. Most government regulations are built around a traditional waterfall development approach, and I am working to change that. The challenge is to find a way to incorporate stringent government oversight into a development method that was designed to remove unnecessary oversight, review, and documentation from the development process. So far it has been an uphill battle, but I am making progress. At this time, I've published two papers on this subject and developed a strategy that has been accepted for use on one of our programs. With luck, I can use the information I learn in this class to help with my work, and use the lessons I'm learning at work to help guide some of the research I conduct for this class!

I'm also an avid tinkerer, and I will always jump on an opportunity to automate something! I have a house full of Raspberry Pis that open my garage doors and turn on my lights! I fly drones whenever I can, and I write terrible Python code in Rube Goldberg-esque attempts to solve simple problems with Arduino boards! I'm excited at the possibility of finding some like-minded students who will geek out with me! Thanks for reading!

~ Ben