How have computer interfaces developed in the post-war period? What ideas, assumptions and philosophies underlay these developments and how successful was their implementation?
Computer interfaces are the point of interaction and communication between users and computers, and maximizing the productivity of this communication constitutes one of the most essential problems of computer usage. Communication becomes more efficient when the computer can analyze the user's intentions effectively and respond to them adequately. The history of computer interface development is rich in detail and particularly interesting in the modern context. What existed at the beginning of computing and what is available now differ so greatly that projecting future development would be almost impossible without taking earlier progress into consideration: anything new is built upon older versions, because technology is not only about introducing innovations but also about advancing what already exists. This essay reviews the historical periods separately in order to make comprehension easier, and examines the ideas, assumptions and philosophies associated with the development of computer interfaces, along with the degree of their success.
This essay will focus specifically on graphical computer interfaces, because they are what is usually meant by the term: the use of graphical icons and a corresponding pointing device to work with a computer. Their history spans five decades of constant change and advancement that nevertheless followed a range of core principles. There are unique developments that implemented their own windowing systems with independent code, but certain fundamental elements are common to everything inside the WIMP paradigm, which embraces windows, icons, menus and a pointing device. It is notable that most improvements over previous interaction systems came in small steps: although there have been significant breakthroughs, the same interaction idioms and organizational metaphors remain in use today. Besides, not all users around the world are at the same level of development: some enjoy the latest advancements, while others can neither access nor afford them. Most GUI operating systems use a mouse for control, though the keyboard can also carry out this function through arrow keys and shortcuts. Touchpads on laptops and, more importantly, touch screens have also gained enormous popularity due to their convenience and comfort.
The History of Computer-Human Interaction through Interfaces
The computer prototype introduced in 1822 was Babbage's Analytical Engine, which was programmed by manipulating clutches, cranks, gears and cams. The next important development took place in 1946, when punched cards, which dated back to the simple tabulating machines of the 18th century, started being used as the major instrument for entering commands and data into computers.
The earliest devices for information exchange were radar displays, and their appearance eventually led to the development of graphical interfaces. In the 1960s the command line interface emerged: users could now type commands to early computers on teletype keyboards.
The first pointing device used to manipulate data directly was the light pen, created at MIT in 1951. It was sensitive to light and was used with glass cathode-ray-tube monitors; the screens were cathode-ray tubes, but information exchange was still text-based only. The multi-panel windowing system was first introduced by early computer display systems, including Ivan Sutherland's Sketchpad and the SAGE Project. Although direct manipulation interfaces are ubiquitous in the modern context, Sutherland's Sketchpad was the first system in which visible objects could be manipulated on screen: it used a light pen that could move and grab objects and change their size, and it employed constraints. A similar product, called Reaction Handler, was offered by William Newman and also enabled the user to manipulate graphical objects. The AMBIT/G system from an MIT group used icons, dynamic menus whose options could be selected with a pointing device, gesture recognition, and selection of icons by a click.
The trackball was adapted for computer use from military systems and air traffic control by MIT scientists in 1964, though it first appeared in 1952. Sensors detected changes in the orientation of a small ball rotated by the user, and these shifts were translated into cursor movements on the computer screen.
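The translation described above can be sketched in a few lines. This is an illustrative assumption rather than the code of any historical system: rotation deltas reported by the ball's sensors are scaled by a sensitivity factor and clamped to the screen bounds.

```python
# Illustrative sketch (not historical code): translating trackball
# rotation deltas into cursor movement on a fixed-size screen.
# The screen size and sensitivity factor are invented for the example.

SCREEN_W, SCREEN_H = 1920, 1080

def move_cursor(cursor, dx_ball, dy_ball, sensitivity=2.0):
    """Scale the ball's rotation deltas and clamp the cursor to the screen."""
    x = min(max(cursor[0] + dx_ball * sensitivity, 0), SCREEN_W - 1)
    y = min(max(cursor[1] + dy_ball * sensitivity, 0), SCREEN_H - 1)
    return (x, y)

# Rolling the ball 10 units right and 5 units up from screen center:
print(move_cursor((960, 540), 10, -5))  # (980.0, 530.0)
```

The same relative-motion principle carries over directly to the mouse, discussed below, which is why early mice and trackballs could share driver logic.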
It is notable that a major part of the research dedicated to the design of computer interfaces was connected with analyzing the learning patterns of young children. For example, the hand-eye coordination typical of younger children (as opposed to voice commands or language) was made the foundation of automated data transformations. Human intellect augmentation started with Doug Engelbart's project in the 1960s (1994). The project developed into the NLS system: a computer with a mouse cursor and several windows for working with hypertext.
Engelbart's work was continued at Xerox PARC, where in 1973 it developed into the Alto personal computer, the first embodiment of the desktop together with a graphical user interface. The project itself was not commercial, but Alto computers were widely used at PARC and other Xerox offices for several decades. Their influence on the development of personal computers was significant, especially on the Macintosh, the Apple Lisa, the Three Rivers PERQ, and the early Sun workstations. The contributors at Xerox PARC included Larry Tesler, Alan Kay, Dan Ingalls and other prominent researchers. It is notable that the Alto used icons, menus and windows that were prototypes for those of today's computers. The research was picked up by Apple and reflected in the development of the Macintosh in 1984, and in 1995 a corresponding version of Microsoft Windows became another successful interpretation of the Alto (Tate & Leontiou 2011). Alan Kay was the person who introduced overlapping windows. Engelbart worked on text editing, search and replace, automatic word wrap, and commands to copy, move and delete characters, words and paragraphs. The specifics of drawing objects were founded on the inventions of Sutherland.
Douglas Engelbart and Bill English also worked together on the creation of the computer mouse. The first sample was developed at the Stanford Research Institute in California in 1963. It looked like a wooden block with one button and two gear wheels positioned perpendicular to each other. During their work at Xerox PARC (Palo Alto Research Center) in 1972-1973, the engineers replaced the two roller wheels with a metal ball that could be rolled in any direction. Its advantage was that the cursor was no longer constrained to the two axes defined by the wheels of the original mouse.
In the 1980s several researchers independently invented the optical mouse. In both early variants the sensors were required to detect light and dark, and a special mouse pad was needed. Today the optical mouse can function on almost any solid surface and uses either a laser or an LED as its light source.
The very first commercially available computer with a mouse was the Xerox Star 8010. It also came with a window-based, bitmapped graphical user interface that used folders and icons (Myers 1998); it was effectively the Alto with several serious advancements. Xerox's workstation systems were meant for business use and were priced accordingly, at several thousand dollars per unit. The Macintosh of the mid-1980s, by contrast, was the first computer intended for personal use, and it included an improved black-and-white GUI and a mouse for moving the cursor on the screen.
As for the graphical user interface, Apple is the company credited with its advancement, because newer operating systems such as Mac OS X, Windows XP/Vista/7 and Gnome/KDE on Linux are all variations of the initial Macintosh interface. In spite of this, some of the unique features that have since become standard were created by small independent developers as well as professional ones. Such enhancements include flexible Open and Save dialog boxes, tear-off menus and configurable boxes. Apple framed this innovation as flexibility: the operating system was left open for minor developers to "polish the interface". Many of these creators and designers were employed by Apple during the 1990s, when their software was added to the original Mac platform. The result was a flexible and mature interface that was copied worldwide and exerted enormous influence on IT development in general. Almost all companies that use a graphical user interface on their devices owe their success to Apple, especially cell phone manufacturers. Certainly, other operating systems influenced Macintosh development as well, including NeXT, later versions of Windows, the Amiga and Sun's Solaris, but after Apple united all these ideas into a single platform, a wider public could access and experience it at work or for personal use. The windows, folders and menus remained simplistic and easy to use while their capabilities grew steadily from the initial Macintosh to System 7.
The collapsible windows of the WindowShade technology were added to the Macintosh in System 7 and OS X. Smart Scroll, which helped users see how much of a document remained below, was invented by Marc Moini; such scroll bars appeared in the Amiga and are now omnipresent in graphical operating systems. Multifunctional dialog boxes for saving and opening files were added by Boomerang, and color icons in dialog boxes appeared thanks to James Walker. Menus with a "What You See Is What You Get" feature, allowing fonts to be displayed in their native typefaces, were offered by Power On Software. The desktop clock was invented by Steven Christensen, and other advancements from unknown developers include the spinning progress cursor shown when the system is busy and a trash bin icon whose visible amount of paper indicates that files are waiting to be deleted completely.
Kaleidoscope was another significant contribution: it extended operating system customization by adding new sounds, schemes and fonts. The trend of customization later became widespread in operating systems such as Windows, Linux and Unix, in mp3 players like Audion and WinAmp, and in cell phones as well.
The majority of these developments relied upon research dedicated to Human-Computer Interaction. Even the Internet is a direct outcome of Human-Computer Interaction research, and its growth was triggered in particular by interface development.
The beginning of 3D is marked by the appearance of Timothy Johnson's 3D CAD system and Larry Roberts's location-sensing system. As for virtual reality, it was originally started by Ivan Sutherland during his time at Harvard.
The birth of multi-touch dates back to 1984, when Bob Boie developed its first transparent-overlay version at Bell Labs. The device consisted of a conductive surface with a voltage applied across it and an array of touch sensors placed over a cathode-ray-tube screen. The assumption was that the natural electrical charge held in the human body produced a local charge build-up when the user touched the screen. The exact position of this disturbance in the screen's field could be determined, enabling the user to draw and manipulate graphical figures on the surface with a finger. This development eventually led to Microsoft's Surface and to Apple's iPad, iPhone and iPod Touch family.
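As a rough illustration of how a touch position might be recovered from such sensor disturbances (a hypothetical sketch, not Boie's actual method), one common approach is to take the charge-weighted centroid of the per-sensor readings:

```python
# Hypothetical sketch: estimating a touch position as the
# charge-weighted centroid of disturbance readings on a sensor grid.
# The grid layout and reading values are invented for illustration.

def locate_touch(readings):
    """readings maps (row, col) sensor coordinates to charge disturbance."""
    total = sum(readings.values())
    if total == 0:
        return None  # no disturbance, so no touch
    row = sum(r * v for (r, _), v in readings.items()) / total
    col = sum(c * v for (_, c), v in readings.items()) / total
    return (row, col)

# A finger mostly over the sensor at (1, 2), partly over (1, 1):
print(locate_touch({(1, 1): 1.0, (1, 2): 3.0}))  # (1.0, 1.75)
```

Weighting by charge lets the estimated position fall between discrete sensors, which is why even coarse sensor grids can yield smooth drawing.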
The Natural User Interface (NUI) senses the user's voice commands and body movements instead of reading input from devices such as a touch screen or a keyboard. These technologies have also undergone great changes. In 2009 Microsoft introduced Project Natal, later renamed Kinect, intended to control the Xbox 360 video game system. Nintendo's Wii is another example, used primarily for fitness activities and racing games.
The latest and most progressive version of the computer interface is the Brain-Computer Interface, a system based on thought control; research in this sphere started in the 1970s. There are two types of Brain-Computer Interface: invasive and non-invasive. The first implies that sensors are implanted into the human brain to detect thought impulses. The second relies on analyzing the electromagnetic waves that pass through the skull, with no implants required (Buch et al. 2008). In Brain-Computer simulations, participants have their brains connected directly to the computer.