Here is a small video I made using Animoto as an example of what a quick commercial for VirtuaPro might look like. The commercial would be targeted toward homeowners who take on multiple home repair projects. Just a quick disclaimer: VirtuaPro does not exist, and I don't own any of the images, video, or audio used in this clip. This was made for educational purposes only.
Tuesday, September 4, 2018
Wednesday, August 29, 2018
My sociotechnical plan
My plan involves using augmented reality as a learning tool. The concept involves developing a device that can provide instructions for a variety of physical tasks. The device would be a headset, like the Microsoft HoloLens, that provides an augmented reality overlay that the user sees over their natural surroundings. The headset can track the user's hands and movements and can also see the user's environment. Using this data, the system provides real-time instructions on how to perform various tasks, like performing electrical or plumbing work in the user's house or performing auto repair. The system would have tremendous commercial and personal uses, from teaching homeowners how to fix a broken dishwasher to guiding surgeons during operations. A major impact that could result from the implementation of this type of device would be public concern over privacy. An example of how this may develop can be observed in the history of Google Glass.
Google announced Project Glass in April of 2012 (McGee, 2015). Project Glass was the name of the division at Google responsible for developing Google Glass, a wearable augmented reality device. Google Glass is a wearable computer that powers a small display window in the upper right or left corner of the user's vision. The display can provide real-time information based on several different factors, like the user's position, input from their phone, or even what they are looking at. Google Glass is equipped with a camera that records video and images that can be passed to Google's search engines, captured on the user's phone or cloud service, or streamed live over several social media applications. This is one of the aspects of Google Glass that led to it being essentially abandoned as a product by Google.
When it was announced, Google demonstrated how Google Glass could instantly capture what the user was seeing by giving demos of skydivers, athletes, and regular users capturing their activities and live streaming them to YouTube, Twitter, and other social media sites. As Google Glass prototypes started being issued to Google employees for testing, public concern about privacy grew around the use of the device ("Global data protection authorities tackle Google on Glass privacy," 2013). People wanted to know if the system was always recording, or how they would know if a Glass user was recording them or sharing photos of them. People didn't like the idea of possibly always being recorded.
One of the privacy issues with Google Glass was how it was engineered. The computer on the device needs to be small so it can be worn comfortably, and it must offload processing to accommodate its small size and to reduce battery drain (Claburn, 2012). This means that the device must offload any recorded image or video to provide most of its augmentation features. Google Glass was being developed in a market where cell phone cameras were becoming the main way people recorded video and took pictures, and where debates about the ethics of recording strangers were a concern for many people. Google wasn't very clear on when Google Glass would be recording or listening, or whether users would even know if it was. Google wasn't helped by reviewers wearing Google Glass in the shower, forgetting to take the devices off when entering restrooms and other private areas, and not being clear on when the device would send data back to Google servers. In the end, Google abandoned the project in January of 2015. They have since updated the software twice in 2017, but there is currently no commercial way to purchase Google Glass.
Using the history of Google Glass as an example of public reaction to privacy concerns around augmented reality, I believe that concerns about privacy when using an augmented reality device can be broken down into two categories. The first is how the system processes and manages images, video, and sound. The second is the public understanding and perception of the use of that data. For the first category, I believe that we are seeing more devices become cloud enabled, and that this is a trend that is going to continue in the future. While it could be argued that a system that processes recorded data locally would be more secure, I do not believe that this is a viable technical solution, and it is not the direction technology is trending. For the second category, public opinion about privacy may be changing. While Google Glass was generally rejected by the public over privacy concerns, devices like the Amazon Echo have become very popular. I believe that this is because the Echo sits in a person's home and not in a public space. Since my device would be used in a home or place of business, I think this may change the way it is perceived as far as privacy is concerned. Each of these categories would require a large amount of research to be properly explored, and each could be a great dissertation topic on its own!
~ Ben
References:
Claburn, T. (2012). 7 potential problems with Google's glasses. InformationWeek - Online. Retrieved from https://proxy.cecybrary.com/login?url=https://search-proquest-com.proxy.cecybrary.com/docview/922740501?accountid=144789
Global data protection authorities tackle Google on Glass privacy. (2013). Biometric Technology Today, 2013(7), 1-3. doi:https://doi.org/10.1016/S0969-4765(13)70116-4
McGee, M. (2015). The history of Google Glass. Retrieved from http://glassalmanac.com/history-google-glass/
Wednesday, August 22, 2018
Serendipity and Smart Dust
Exaptation is when an innovation that was originally designed for one purpose is redesigned for another. A good example of this is the invention of air conditioning. Willis Carrier was trying to remove humidity from a lithographing office in 1902 (Lester, 2015). His invention was able to increase or decrease the humidity in a room, and it had the additional benefit of cooling the air in the room as well.
Discovery by error is likely the most common form of accidental invention. This is when a mistake during research or development returns a positive result. During software design, this is usually called turning a bug into a feature. For example, Gmail originally had a 5 second delay when processing email. Instead of fixing the delay, developers added an ‘undo’ button that would stop the email from being sent (Leggett, 2009). This way, the error in processing the email message turned into a feature that allowed users to quickly call back an email after hitting send.
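As a sketch of how that kind of feature can work, the example below delays the actual send and hands back a cancel function; the function names are hypothetical and this is not Gmail's actual implementation, just the general "delay becomes the undo window" idea.

```python
import threading

def send_with_undo(send_fn, message, delay_seconds=5):
    """Schedule the real send after a short delay and return an 'undo' handle.

    The delay becomes the undo window: if the returned cancel function is
    called before the timer fires, the message is never actually sent.
    """
    timer = threading.Timer(delay_seconds, send_fn, args=(message,))
    timer.start()
    return timer.cancel  # calling this is the "undo send" button

# Hypothetical usage: undo is pressed within the window, so nothing is sent.
undo = send_with_undo(lambda msg: print("sent:", msg), "Hello!")
undo()
```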
Serendipity occurs when a positive outcome is the result of a chance event. A good example of this could be getting lost but finding a great new restaurant or bookstore where you end up. You weren't intending to look for a restaurant, but because you went the wrong way, you found it.
In October of 2003, a graduate student at the University of California, San Diego won $50,000 as the grand prize in the Collegiate Inventors Competition for her invention of 'smart dust'. Smart dust consists of silicon particles that can be used to detect a variety of biological and chemical agents in different media (Link, 2005). Since its invention in 2003, there have been many more proposed and applied applications for smart dust. Along with sensing the molecular structure of different objects, smart dust can sense minute levels of light as well. Smart dust is being adapted to carry signals, which could result in things like wireless sensor nodes that are a cubic millimeter in size. Jamie Link was in the process of making a thin multi-layer porous silicon chip when the chip snapped. The accident released small amounts of silicon dust that held the same properties as the chip. This serendipitous event brought about an invention that has a wide range of uses in medical and environmental diagnostics and research.
References:
Leggett, M. (2009, March 19). New in Labs: Undo Send. Retrieved August 22, 2018, from
https://gmail.googleblog.com/2009/03/new-in-labs-undo-send.html
Lester, P. (2015, July 20). History of Air Conditioning. Retrieved August 22, 2018, from
https://www.energy.gov/articles/history-air-conditioning
Link, J. R. (2005). Spectrally encoded porous silicon “smart dust” for chemical and biological sensing applications. (3171107 Ph.D.), University of California, San Diego, Ann Arbor. ProQuest Dissertations & Theses Global database.
Sunday, August 19, 2018
Forecasting Piracy
Forecasting is the act of predicting future trends based on past events. Typically, forecasting is used when predicting the weather. Meteorologists use past weather phenomena as indicators of what future weather may be like. People use forecasting in almost every aspect of their lives. If you drank too much at last year's Christmas party and embarrassed yourself, you may drink less at this year's party so you don't suffer the same fate. What is essentially happening is that you forecast the results of drinking too much at the party, and you adjust your actions to avoid that prediction. The concept behind this is that the future is relatively predictable, and events will tend to repeat themselves. Unfortunately, this isn't always the case, and traditional forecasting can lead us to make the wrong decisions. Therefore, planning outside of forecasting should be implemented. Scenario planning is a different method for planning for future events. Scenario planning builds sets of likely events and then builds plans to respond to those likely events. The core concept behind scenario planning is answering the question 'what if' (Chermack, 2004). Scenario planning provides the benefit of allowing for the inclusion of possible events and agents of change that may be new or previously unrelated to our forecasting efforts. A good example of the differences between forecasting and scenario planning, and how they may affect business, can be found by researching the music industry's response to digital music and piracy in the late 1990s.
For many years, the music industry sold music in albums. If someone wanted to listen to their favorite song, they had to buy the whole album. Sometimes songs were so popular that they'd be released as a single for less money, but it was often the case that the album had to be purchased. Piracy of music existed through illegal copies of these albums, first with devices that could press copies of records, then cassette tape recorders, and then digital compact disc (CD) writers. The music industry would combat these forms of piracy, as they saw each copy as a lost sale (Marshall, 2004). The years of music sales and distribution locked the music industry into forecasting that the same actions would yield the same profits and success that they were used to. This all changed with the rise of digital piracy. The music industry knew that most people would prefer to purchase individual songs, but the industry made more money when people purchased albums. This is the reason why singles weren't as common: not because they wouldn't sell, but because they weren't as profitable. When music changed to a digital format, it was much easier for people to pull the individual songs they wanted and share them between computers over the internet. Figure 1 shows how digital single downloads dramatically increased over physical CD sales and even full album downloads as digital music became more accessible over time. The first mainstream music piracy application, Napster, made digital music piracy easily accessible for people with only a moderate amount of technical knowledge. The music industry reacted to digital piracy the same way they reacted to record pressing machines in the 1950s. They condemned the practice and raised prices, in part citing lost sales due to piracy. According to their forecasting models, this was the tried and true response. The music industry had more than enough information and time to capitalize on this new distribution method but failed to do so due to poor forecasting. Instead of adopting and commercializing sharing apps like Napster, they fought them, only to have hundreds of copycat programs replace the few that started. By the time the music industry decided that digital downloads were a permanent change in music distribution, the illegal methods of obtaining them were so refined and easy to use that the industry couldn't create a system that was preferable to piracy.
Figure 1: Music Sale Trends (Rocket, 2018)
The music industry failed to capitalize on one of the biggest softball opportunities presented to any industry. They had hundreds of thousands of digital products that were in high demand. They had a model system in Napster for how to distribute their products, and they had a trend in technology that supported this new business model. The deck couldn't have been stacked more in their favor, and they blew it. Instead of realizing that the market was changing and conducting any predictive modeling or scenario planning, the music industry stuck to their guns. They pushed legal action against pirates, increased the cost of physical media, and reduced access to single-song digital downloads. In 2002, the music industry was on the verge of collapse. Consumers weren't interested in buying the physical CDs they were selling, and the industry had failed to adopt a distribution system that provided digital single-song access the way piracy applications had been doing for the previous few years. It wasn't until Apple iTunes, Google Music, and other digital purchase and streaming sites became available that this trend started to reverse. Figure 2 shows that digital single-song sales far surpass all physical sales previously recorded.
Figure 2: Digital Downloads (Cumberland, 2013)
It's very possible that the music industry could have capitalized on the changes in music distribution and customer demands if they had conducted scenario planning to complement their forecasting. Scenario planning could have answered the 'what if' questions that would have allowed the music industry to build an adaptive strategy and embrace digital music access the way iTunes and Google Music eventually did (Marshall, 2004). The shift to digital streaming has moved influence over the industry away from the major music labels and toward providers like Apple and Google. Since the need to produce physical copies of digital media has almost been eliminated, small producers can go directly to distributors like Google and get their music directly to customers. The failure of the major labels to plan for this scenario resulted in them losing their hold on the music industry. Scenario planning can account for the social impact of change and build possible responses that account for those changes. In the case of the music industry, scenario planning could have been the answer to keeping the industry in the same dominant position while adapting to the social changes in digital music consumption.
References:
Chermack, T. J. (2004). A Theoretical Model of Scenario Planning. Human Resource Development Review, 3(4), 301-325. doi:10.1177/1534484304270637
Cumberland, R. (2013, June 13). The new music business model: How did the industry get here and what's next? Retrieved August 19, 2018, from https://www.bemuso.com/articles/thenewmusicbizmodel.html
Marshall, L. (2004). The Effects of Piracy Upon the Music Industry: a Case Study of Bootlegging. Media, Culture & Society, 26(2), 163-181. doi:10.1177/0163443704039497
Music Industry Sales, Piracy and Illegal Downloads – Better or Same? (2013, July 03). Retrieved August 19, 2018, from http://www.rockitboy.com/blogs/music-industry-sales-piracy-and-illegal-downloads-better-or-same/
Saturday, August 4, 2018
Traditional Forecasting vs Scenario Planning
Traditional forecasting:
Forecasting is one of the steps taken when planning for the future. It is a process of using current and past information to attempt to predict what may happen in the future. A family planning a trip may use the amount of money spent on food during previous trips to forecast how much they may spend on future trips. Meteorologists forecast future weather events in part by using data from past events. Traditional forecasting encompasses commonly used methods like the naïve forecasting method, causal forecasting, and the Delphi method. While these approaches vary, each one uses past data in some way to attempt to predict future events (Porter, 2011). For example, causal forecasting attempts to use related data to predict a future event: if a movie has high ticket sales, we can assume that the action figure toys for that movie will sell well. The Delphi method uses the opinions of experts about past events to predict future events. A drawback to this type of forecasting is that it only prepares the participants for events that have already been observed, since the analysis is based primarily on past data. Using the meteorologist example, if data is coming in that the meteorologist doesn't have a model to base their forecasting on, they can't predict the weather. This is usually parodied in movies during a cataclysmic event, when the resident expert is asked what is going to happen and turns to the camera and says "I don't know!" The next time that happens, we now know to scream out in the theater, "Traditional forecasting does not take previously unobserved phenomena into account during analysis!!" I strongly recommend not doing this.
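To make the contrast concrete, here is a minimal sketch of my own (not from Porter) showing a naïve forecast, which simply projects the most recent observation, and a moving-average variant. Both rely entirely on past data, which is exactly the limitation described above; the spending numbers are made up.

```python
def naive_forecast(history):
    """Naïve method: the forecast for the next period is the last observed value."""
    return history[-1]

def moving_average_forecast(history, window=3):
    """Average the most recent `window` observations; still purely backward-looking."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Hypothetical food spending from past family trips.
trip_food_spend = [410, 395, 430, 420]
print(naive_forecast(trip_food_spend))           # 420
print(moving_average_forecast(trip_food_spend))  # (395 + 430 + 420) / 3
```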
Scenario Planning:
Scenario planning is a unique approach to predicting future events and trends. Instead of using past data to attempt to predict future events, scenarios are developed that represent what may happen in the future. Those scenarios are played out to their logical conclusions, and decisions are made based on the results. A good example would be creating a fire escape plan for a building. The planners can develop scenarios based on where the fire may be. For example, if the fire is near the main escape path from the building, the planners can work through that scenario and determine an alternate path out (Wade, 2012). The strength of this approach is that a scenario can be built from what could possibly happen, instead of what has happened. Using another Hollywood example, in the movie World War Z a plague sweeps across the world and only Israel is prepared. In the movie, the Israeli government identified this possible scenario early and planned it through to its logical conclusion. Using traditional forecasting, this would probably not have been the result, since the outbreak was the first of its kind. A weakness of scenario planning is the opposite of traditional forecasting's: because it is not based on past data, it can be wildly subjective and miss the mark of what really happens when the scenario plays out. Scenario planning also breaks down for short-term or highly specific planning (Coates, 2016). In the book Foundation by Isaac Asimov (1951), humanity builds a massive supercomputer that can use traditional forecasting to predict the future for several thousand years. Throughout the book, we find that the scenarios built by the supercomputer during a massive war were way off when planning short-term actions, and the techs monitoring the system added changes whenever they thought the system was wrong. Although fictional, this is a good example of this flaw in scenario planning.
References
Wade, W. (2012). Scenario Planning: A Field Guide to the Future. John Wiley & Sons.
Porter, A. (2011). Forecasting and Management of Technology, Second Edition. John Wiley & Sons.
Asimov, I. (1951). Foundation. Gnome Press.
Coates, J. F. (2016). Scenario planning. Technological Forecasting and Social Change, 113, 99-102.
Sunday, July 29, 2018
Innovation by Accident
Not every product we use today was planned out perfectly from the start. It is often the case that products are discovered accidentally, or that the way they are used commercially has nothing to do with the original intent of the product's creator. Famous examples of this are things like the Slinky toy, X-rays, and the microwave oven. Each of these inventions was created during research and engineering efforts to solve other problems. The Slinky came into existence when an engineer dropped a large industrial spring, and X-ray imaging and the microwave oven were both discovered during unrelated research into radiation, cathode ray tubes in the case of X-rays and radar in the case of the microwave. There are other instances where a byproduct or quick-fix action ends up becoming a prominent feature of a system. Two technologies that fit this description are Network Address Translation (NAT) and Short Message Service (SMS) text messaging.
Network Address Translation
To understand the accidental impact of NAT, we must first briefly review the history of the Internet. The Internet that we use today was first developed as a communications network for the United States military. The Advanced Research Projects Agency (ARPA) built ARPAnet in 1969 as a way to connect military mainframe computers together. The original addressing scheme for ARPAnet was 8 bits and allowed for 256 different host addresses. The original ARPAnet started with 4 hosts and quickly grew to 213 hosts by 1981 (Bort, 1995). Realizing the limitations of the ARPAnet addressing scheme, Robert Kahn and Vinton Cerf started working on a new addressing scheme. The fourth iteration of their work produced IPv4 addressing. IP stands for Internet Protocol, and v4 refers to the fourth version they created. An IPv4 address is made up of four 8-bit octets and can support 4,294,967,296 unique addresses (Bort, 1995). This version was a tremendous upgrade from the original 256 possible hosts, but due to the rapid expansion of the Internet in the 1990s, the mismanagement of address allocation, and other technical issues with routing and traffic management, the Internet was facing a real problem of running out of address space that needed to be addressed. The IPv4 address space was very large, and there needed to be a way to route traffic to the appropriate networks around the Internet. The original solution was to build classes into the address space. IPv4 classes are simply a way to identify the size of a network based on the value of the first octet in the address. The address space was broken up into four classes (a small sketch after the list shows how the first octet maps to a class):
- Class A: first octet value of 0 - 127; 126 networks with 16,777,214 hosts each
- Class B: first octet value of 128 - 191; 16,384 networks with 65,534 hosts each
- Class C: first octet value of 192 - 223; 2,097,152 networks with 254 hosts each
- Classes D & E: first octet value of 224 - 255; used for multicasting and R&D
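As a quick illustration of the ranges above, here is a minimal sketch (my own example, not from the cited sources) that maps an IPv4 address's first octet to its class:

```python
def ipv4_class(address):
    """Return the classful designation of an IPv4 address based on its first octet."""
    first_octet = int(address.split(".")[0])
    if first_octet <= 127:
        return "A"    # 126 usable networks, ~16.7 million hosts each
    elif first_octet <= 191:
        return "B"    # 16,384 networks, 65,534 hosts each
    elif first_octet <= 223:
        return "C"    # 2,097,152 networks, 254 hosts each
    else:
        return "D/E"  # multicast and experimental space

print(ipv4_class("10.1.2.3"))     # A
print(ipv4_class("172.16.0.1"))   # B
print(ipv4_class("192.168.1.1"))  # C
```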
This wasn't a perfect solution because the number of addresses per class was not scalable. For example, there are only 126 Class A networks that could be given out, and each one had almost 17 million usable hosts. There are over 2 million Class C networks that can be given out, but each one only has 254 available hosts. During the 1990s, many large companies were given Class A networks and didn't use anywhere near the number of hosts they had available, so those addresses were essentially lost. Meanwhile, smaller companies grew and required more and more Class C networks as their demand for hosts increased. The Class C networks weren't given out in sequence, so companies had network addresses that weren't numerically close to each other, which increased the difficulty of routing Internet traffic. The problem was quickly becoming unmanageable.
IPv6 was developed to expand the total address space of the Internet. It can support 340,282,366,920,938,463,463,374,607,431,768,211,456 total addresses (Loshin, 2001)! The problem was that IPv6 wasn't drafted until 1998, and it was taking much too long to become a standard. A temporary solution needed to be found. NAT was developed to temporarily solve the lack of address space but ended up solving so many other issues that it essentially delayed the rollout of IPv6 for almost 14 years! NAT is a protocol that runs on a router that sits between an internal network and the Internet (Trowbridge, 1997). What NAT does is simply translate IP addresses on the internal network to IP addresses being used on the Internet. The feature that makes NAT so useful is that this translation can be one-to-many. This means that an organization can host multiple systems internally while using only one address to access the Internet. NAT adds information to the header of network traffic that is used to map that traffic between an internal and an external IP address. This way, one external IP address can be used to provide Internet connectivity to multiple hosts. NAT inadvertently solved many of the problems with IPv4. Since IPv4 addresses could be reused internally, corporations only needed a few valid IP addresses to provide Internet connectivity to all their hosts. NAT averted the risk of running out of addresses so successfully that IPv6 could be delayed for years with almost no repercussions. This allowed IPv6 development to continue and produced a very robust addressing solution that should provide sustainable address space for years to come. There were several other factors dealing with IP addressing that contributed to and augmented NAT, such as Classless Inter-Domain Routing (CIDR), that can be explored to provide a more detailed picture of how NAT helped change the way the Internet worked.
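To make the one-to-many idea concrete, here is a minimal conceptual sketch of a port-based translation table. The addresses, ports, and class name are all made up for illustration; real NAT implementations are considerably more involved than this toy.

```python
class SimpleNat:
    """Toy port-address translation table: many internal hosts share one public IP."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000
        self.outbound_map = {}  # (internal_ip, internal_port) -> public_port
        self.inbound_map = {}   # public_port -> (internal_ip, internal_port)

    def translate_outbound(self, internal_ip, internal_port):
        """Rewrite an outgoing flow so it appears to come from the shared public IP."""
        key = (internal_ip, internal_port)
        if key not in self.outbound_map:
            self.outbound_map[key] = self.next_port
            self.inbound_map[self.next_port] = key
            self.next_port += 1
        return self.public_ip, self.outbound_map[key]

    def translate_inbound(self, public_port):
        """Map a reply arriving at the public IP back to the internal host."""
        return self.inbound_map.get(public_port)


nat = SimpleNat("203.0.113.5")
print(nat.translate_outbound("192.168.1.10", 51000))  # ('203.0.113.5', 40000)
print(nat.translate_outbound("192.168.1.11", 51000))  # ('203.0.113.5', 40001)
print(nat.translate_inbound(40001))                   # ('192.168.1.11', 51000)
```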
Text Messages
Text messaging has quickly become the standard way of communicating with mobile phones over the last 15 years, and it wasn't a feature that was planned for commercial use at all. Telephone lines have historically been analog systems, which means that they use waveforms to transmit voice and data instead of digital data such as bits. Early telephone systems handled all aspects of the call using signals that could be sent through the same wires that were used to send the voice waveforms. For example, phones ring by having a telephone switch send a higher than normal amount of electricity to the phone, which used to activate an electromagnetic bell in the phone and make it ring. As telephones and telephony systems became more complex, the signals passed on the wire did as well. The signaling data eventually had to move off the voice transmission lines and onto separate lines for management, which is called out-of-band signaling. Large phone switches would communicate things like timing, line availability, and other management information over out-of-band signaling. In 1984, Friedhelm Hillebrand and Bernard Ghillebaert realized that this signaling capacity wasn't always being used (Kuerbis, van Stolk-Cooke, & Muench, 2017). They developed a way to send ASCII characters along the signaling lines when they were not being used, which let them send text messages from phone switches to end users. The signaling formats that would carry the messages could only support messages of 128 characters at a time, and both Hillebrand and Ghillebaert believed that end users would only be able to acknowledge the message. The Global System for Mobile Communications (GSM) group met in 1985 and started the process of developing the concepts behind Short Message Service (SMS), which is the standard used to send text messages. Since all phone traffic requires signaling data, providers could give text message access to their customers while incurring almost no cost to develop or implement the service. They were simply using a resource they already had in a different way.
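As a rough illustration of working within a fixed per-message size like the 128-character limit described above, here is a small sketch that splits a longer text into separate segments. The splitting logic is my own simplification for illustration, not the actual SMS encoding or segmentation scheme.

```python
def split_message(text, limit=128):
    """Break a message into chunks no longer than the per-message character limit."""
    return [text[i:i + limit] for i in range(0, len(text), limit)]

message = "This note is longer than a single signaling-channel message can carry. " * 3
for n, segment in enumerate(split_message(message), start=1):
    print(f"segment {n} ({len(segment)} chars): {segment[:40]}...")
```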
SMS allowed for broadcasts to phones, as Hillebrand and Ghillebaert first envisioned, but also point-to-point messaging between phones. SMS messaging was first commercialized by Nokia in 1994 and gained popularity with the advent of smartphones like the iPhone. In 1999, text messaging between networks became possible, and SMS messaging dramatically increased. The average mobile phone user sent about 35 text messages a month in 2000; by 2010, 200,000 text messages were being sent every minute, and over 6.1 trillion texts were sent that year (Steeh, Buskirk, & Callegaro, 2007)! Text messaging packages with cellphones started as an expensive perk and are now a necessity for any phone plan. Text messaging remains one of the largest examples of companies charging customers premium prices for a service that cost them almost nothing to implement. It has also become the de facto way to communicate today, and it was never intended for that use!
References:
Bort, J. (1995). The address mess. InfoWorld, 17(46), 75.
Kuerbis, A., van Stolk-Cooke, K., & Muench, F. (2017). An exploratory study of mobile messaging preferences by age: Middle-aged and older adults compared to younger adults. Journal of Rehabilitation and Assistive Technologies Engineering, 4, 2055668317733257. doi:10.1177/2055668317733257
Loshin, P. (2001). Network address translation. Computerworld, 35(8), 60.
Steeh, C., Buskirk, T. D., & Callegaro, M. (2007). Using Text Messages in U.S. Mobile Phone Surveys. Field Methods, 19(1), 59-75. doi:10.1177/1525822x06292852
Trowbridge, D. (1997). A natty solution to a knotty problem. Computer Technology Review, 17(2), 1-1,6+.
Wednesday, July 18, 2018
Decision Making Techniques
Futuring and Innovation
CS875-1803C-01
Benjamin Arnold
Professor: Rhonda Johnson
Most projects involve group decision making at some level. Most of the time, this is handled in an informal way through meetings or email correspondence. While an informal approach may work for some groups, other groups may benefit from a more structured approach to group decision making. The Delphi technique is a group decision-making method named after the Oracle of Delphi, a mythical Greek fortune teller. The Delphi technique was developed in 1959 by Olaf Helmer, Norman Dalkey, and Nicholas Rescher for the RAND Corporation. The Delphi technique uses anonymous input and a structured flow of information between participants to protect against things like personal bias or a bandwagon effect, where a specific idea supplants other valid information (Helmer-Hirschberg, 1967). Experts in a given field are provided questionnaires that are designed to capture the experts' information and opinions about an issue. The opinions are converted into an approach to address the issue. That approach is refined with continuous, anonymous feedback as the process continues.
Another group decision-making technique is Forced Ranking, which is also known as the Kepner-Tregoe decision matrix. Forced Ranking is a decision-making technique that uses a decision matrix to force a ranking among possible alternative solutions. In this technique, several possible solutions for an issue are identified. Those solutions are then scored using a weighted value that is determined using a decision matrix. The decision matrix lists various criteria that can be used to determine the important factors addressed by each of the proposed solutions. The factors are ranked in importance by being given a weight. Then each solution is given a rating for how well it addresses each criterion. The weighted rating is the weight value times the rating value for each criterion (Welch, 2002). All weighted ratings are tallied, and the solution with the highest cumulative weighted score is determined to be the best solution for the issue. This technique also attempts to eliminate bias and possible bandwagon effects by separating the participants from the solution. The weighted ranking adds objective metrics to possibly subjective criteria (Bens, 2005). Also, having the participants break down and grade different aspects of a solution, instead of the solution as a whole, provides a method where the solutions are looked at objectively and with more rigor by each participant.
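Here is a minimal sketch of the weighted-rating arithmetic described above, using made-up criteria, weights, and ratings purely for illustration:

```python
# Hypothetical criteria weights (importance) and per-solution ratings (1-10).
weights = {"cost": 5, "time to implement": 3, "risk": 4}

solutions = {
    "Solution A": {"cost": 7, "time to implement": 6, "risk": 8},
    "Solution B": {"cost": 9, "time to implement": 4, "risk": 5},
}

def weighted_score(ratings):
    """Sum of weight * rating across all criteria for one solution."""
    return sum(weights[criterion] * ratings[criterion] for criterion in weights)

scores = {name: weighted_score(ratings) for name, ratings in solutions.items()}
best = max(scores, key=scores.get)
print(scores)                               # {'Solution A': 85, 'Solution B': 77}
print("Highest cumulative weighted score:", best)
```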
A third approach is the OODA loop. OODA stands for observe, orient, decide, and act. It is a decision cycle developed by Col. John Boyd of the United States Air Force as a way to think about conflict situations. Boyd believed that since reality is always changing, any model of that reality would also have to constantly change. This model is like the other two examples in that it attempts to bring order to unknown variables. The first part of the OODA loop is to observe these changes. The concept is to be in a constant state of readiness to adapt to changes. The second stage is orientation, which is perhaps the most critical step in the process. Orientation means bringing the observations to bear and processing that information efficiently to prepare to decide (Enck, 2012). Boyd suggested that having a robust background in several disciplines would be an advantage at this stage. In a group setting, this is where the group would call on the individual expertise represented in the group to successfully process the information that was observed. The last two stages are relatively straightforward: decide on a course of action and then act on it. The key takeaways from these stages are to commit to the decision, complete the action, and ensure that any feedback is retained when restarting the loop. A unique factor about implementing an OODA loop is that it is ineffective if not used constantly. A team cannot start up an OODA loop process for a single situation and then stop after the first action. The OODA loop process works best when it is a constant state the team is in and is always being practiced. After every action, observation continues.
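As a rough sketch of that loop structure (not Boyd's formal model), the example below runs the four stages continuously and feeds each action's outcome back into the next observation; the stage functions are placeholders standing in for whatever observe, orient, decide, and act mean for a given team.

```python
def run_ooda(observe, orient, decide, act, cycles=3):
    """Run the observe-orient-decide-act cycle continuously, feeding the
    outcome of each action back into the next observation."""
    feedback = None
    for _ in range(cycles):
        observations = observe(feedback)  # stay ready: watch for changes
        model = orient(observations)      # bring expertise to bear on what was seen
        decision = decide(model)          # commit to a course of action
        feedback = act(decision)          # act, then keep observing

# Placeholder stage functions for illustration only.
run_ooda(
    observe=lambda last_outcome: {"last_outcome": last_outcome},
    orient=lambda observations: {"assessment": observations},
    decide=lambda model: "adjust course",
    act=lambda decision: f"result of '{decision}'",
)
```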
References
Helmer-Hirschberg, O. (1967). Analysis of the Future: The Delphi Method (P-3558). Santa Monica, Calif.: RAND Corporation. Retrieved July 18, 2018, from https://www.rand.org/pubs/papers/P3558.html
Welch, D. A. (2002). Decisions, Decisions - The art of effective decision making. Prometheus Books. (ISBN 1-57392-934-4)
Bens, I. (2005). Facilitating with Ease - Core skills for facilitators, team leaders and members, managers, consultants and trainers. Jossey-Bass. (ISBN 0-7879-7729-2)
Enck, R. E. (2012). The OODA Loop. Home Health Care Management & Practice, 24(3), 123-124. doi:10.1177/1084822312439314
Code Literacy - Horizon Report for Higher Education
The 2017 Horizon Report addresses the subject of coding as a literacy as a key short-term trend in K-12 education. Short-term trends are described as technology adoptions that are trending within a one- to two-year time frame. The report states that computer science is currently one of the fastest growing industries and that coding literacy is quickly becoming a critical skill across many different career fields, including many non-technical fields. The report states that many education programs around the world are including basic coding in their curriculum (Freeman et al., 2017). I found this article interesting because I regularly volunteer for several programs in and around Texas that provide coding camps for children. One program, called Youth Code Jam, focuses on teaching computer science concepts to children with learning disabilities like autism but is also open to anyone who wishes to attend. As a parent of two school-aged children and as a regular volunteer, I do not believe that our public education system is focusing enough effort on teaching code to children. I believe this is due to several forces that negatively impact adopting coding literacy as a priority in the school curriculum: teachers need to be educated in coding themselves, the cost of resources to teach coding needs to be addressed, and the availability of resources to teach coding is also a challenge.
I believe that there are current technologies that can successfully address each of these issues. To start with the issue of cost, we often use Raspberry Pi mini-computers as platforms for many of our Youth Code Jam events. Raspberry Pis are small single-board computers with video and audio output, USB and network interfaces, and Wi-Fi capability. The computers are very inexpensive and make great platforms on which to teach coding literacy. There are many different projects and applications that students can attempt and learn from. At my daughter's public school, I am trying to start a program where each student is given a Raspberry Pi as a personal coding platform. The Pi is small enough that they can carry it in their backpack and take it to classes with them. Each class then just needs monitors, keyboards, and Wi-Fi access for the students to use the Pi to access the internet and learn to code. This leads to the second solution, availability. There are several amazing resources online that teach coding at a beginner level. The Horizon Report links to Common Sense Education, which lists the 29 highest-rated online resources for learning code. Sites like Code.org and Code Academy provide lessons in many popular programming languages, where students can work through lessons at their own pace and be graded on their progress (Code, 2017). These online resources address the last issue, teacher education. I believe that allowing teachers to use these online resources as part of a formal curriculum would alleviate some of the burden of becoming proficient at coding before they can properly educate their students (Learn, 2017). With online resources like Code Academy, teachers can learn alongside their students and can rely on the lessons that have already been created for the site instead of having to build lesson plans for a subject that they are still learning themselves.
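As an example of the kind of first lesson a student might run on a classroom Raspberry Pi, here is a short sketch using the gpiozero library to blink an LED. It is meant to run on a Pi and assumes an LED (with a resistor) is wired to GPIO pin 17; the pin choice is arbitrary.

```python
# A typical first Raspberry Pi coding exercise: blink an LED ten times.
from time import sleep
from gpiozero import LED

led = LED(17)  # assumes an LED is wired to GPIO pin 17

for _ in range(10):
    led.on()
    sleep(0.5)
    led.off()
    sleep(0.5)
```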
[Image: Raspberry Pi 3 B+ - https://upload.wikimedia.org/wikipedia/commons/thumb/9/97/Raspberry_Pi_3_B%2B_%2839906369025%29.png/300px-Raspberry_Pi_3_B%2B_%2839906369025%29.png]
References:
Freeman, A., Adams Becker, S., Cummins, M., Davis, A., & Hall Giesinger, C. (2017). NMC/CoSN Horizon Report: 2017 K–12 Edition. Austin, Texas: The New Media Consortium.
Code.org: Anybody can Learn. (2017). Retrieved July 18, 2018, from http://code.org
Learn to code - for free. (2017). Retrieved July 18, 2018, from http://codeacademy.com
Sunday, July 15, 2018
First Post
Hi, this is the
first post for my blog for the class Futuring and Innovation (CS875-1803C-01)
at Colorado Technical University. I'm working towards getting a Doctor of
Computer Science degree and I'm currently in the third quarter of my first
year. I'm excited for the challenges that this course and this degree have to
offer!
Throughout my
career, I have tried to be in a state of constant improvement and learning. I
believe that it is an essential part of being a computer scientist to always
have the attitude of a student. With technology changing daily, it is
imperative that computer scientists stay up to date with current innovations,
so they can apply them in their work.
I started my career as a Russian linguist for the US Air Force. Once I realized that I truly did not want to do that job for a living, I transferred to become a communications specialist, which led to my becoming a Systems Administrator at the headquarters for the Air Intelligence Agency. This job allowed me to find a rather small and select office to work in that was working with offensive and defensive cyber weapons. In 2005 I completed my enlistment and was hired as a contractor with Northrop Grumman to work in the same office. I worked my way up the ranks and was recruited by MITRE to be lead systems engineer for offensive cyber development operations. MITRE provides full education benefits for their employees. I have already received my bachelor's in computer science and my master's in information assurance and network defense. I signed up for my doctorate a week after I was hired by MITRE!
I'm very excited to
complete this degree. A doctorate in computer science will allow me to move up
within my company, and with my government sponsors. I was really excited to
start Futuring and Innovation and I believe this will be one of my favorite
classes in my degree. I would like to focus this blog on interesting
information that I find that could be used to further my understanding of my research
topic. I'm currently working with various DoD organizations to implement agile
development methodologies into cyber weapon development. Most government
regulations are built around a traditional waterfall development approach, and
I am working to change that. The challenge is to find a way to incorporate
stringent government oversight into a development method that was designed to
remove unnecessary oversight, review, and documentation from the development
process. So far it has been an uphill battle, but I am making progress. At this
time, I've published two papers on this subject and I've developed a strategy
that has been accepted for use on one of our programs. With luck, I can use the
information I learn in this class to help with my work and use the lessons I'm
learning at work to help guide some of the research I conduct for this class!
I'm also an avid tinkerer, and I will always jump on an opportunity to automate something! I have a house full of Raspberry Pis that open my garage doors and turn on my lights! I fly drones whenever I can, and I write terrible Python code in Rube Goldberg-esque attempts to solve simple problems with Arduino boards! I'm excited at the possibility of finding some like-minded students who will geek out with me! Thanks for reading!
~ Ben