The very earliest computers were hard-wired machines which could only run one program and needed rewiring to run anything else. In 1948, the Manchester Baby became the first stored-program computer: a machine which eliminated the distinction between code and data and used the same storage for both, allowing ‘rewiring’ simply by changing the contents of storage.
All modern computers follow this model, and as the complexity of the programs to be stored has increased, producing them has become an increasingly difficult challenge.
Early programming was done by the programmer writing the machine instructions, sometimes called orders. These would be of the form ‘load memory address 100 into register 2’ or ‘add the contents of register 2 to register 3.’ Each instruction was a combination of an operation and one or more operands (things being operated on, such as register numbers, memory addresses, or constant values).
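The instruction format described above can be sketched as a tiny hypothetical machine; the register and memory sizes here are arbitrary, chosen only for illustration:

```java
// A hypothetical miniature machine: each instruction pairs an operation
// with operands (register numbers, memory addresses, or constants).
class MiniMachine {
    int[] registers = new int[4];
    int[] memory = new int[256];

    // 'load memory address addr into register reg'
    void load(int reg, int addr) {
        registers[reg] = memory[addr];
    }

    // 'add the contents of register src to register dst'
    void add(int dst, int src) {
        registers[dst] += registers[src];
    }
}
```

Early programmers wrote sequences of such operation/operand pairs directly, as numbers.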
Remembering the numbers corresponding to operations was difficult, and when computers became able to handle text it became common to use mnemonics for operations instead.
As complexity of programs increased, programmers started keeping libraries of algorithms that they used frequently and could insert into their programs where needed. Complex programs were assembled by combining these blocks. This process gradually evolved into the high-level programming languages used today.
Objects as Simple Computers
[Object Oriented Programming] to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I’m not aware of them.
Alan Kay, inventor of the term ‘Object Oriented’
In the ’70s, a number of major developments came out of Xerox’s Palo Alto Research Center (PARC). These included the graphical user interface, Ethernet networking and the laser printer. In addition to these was a new way of thinking about programming, known as object oriented design. Rather than viewing programs as a set of subroutines which called each other, as procedural programming encouraged, object oriented programming decomposed a large program into objects. An object is a simple model of a computer, which interacts with other objects via message passing.
Many object oriented languages include the notion of a class. This is a special kind of object which is used to create objects. In class-based languages, an object’s behaviour is defined by its class, which may in turn inherit some of its behaviour from another class. This idea comes from the Simula language, originally designed for simulation. Classes were introduced in Simula to allow general categories of simulated objects to share code easily, and could be refined to represent more specialised types of simulated object. Although object oriented languages inherit a lot of ideas from Simula, Simula itself lacked a number of features, such as encapsulation, that are generally regarded as requirements for an object oriented language.
In an unstructured program, flow is controlled by using jumps. With procedural programming, flow is controlled via subroutine calls and returns. With object oriented programming, control flows via message passing.
In Smalltalk, the canonical object oriented language, there are no explicit flow control operations at all. There is one built-in type of object, called a BlockClosure, which represents a block of code and responds to a value message, which evaluates the block and returns its result. Conditional expressions are formed by sending an ifTrue: message to an object representing a boolean value, with a block as the argument. Instances of the True class execute the block when they receive the message, while instances of the False class do not.
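The idea of conditionals as ordinary messages can be sketched in Java (the class names mirror Smalltalk’s True and False; Runnable stands in for a block closure):

```java
// Sketch of Smalltalk-style conditionals: True and False are classes,
// and 'ifTrue:' is an ordinary message whose argument is a block.
abstract class Bool {
    abstract void ifTrue(Runnable block);
}

class True extends Bool {
    void ifTrue(Runnable block) {
        block.run();        // True evaluates the block...
    }
}

class False extends Bool {
    void ifTrue(Runnable block) {
        // ...while False simply ignores it. No 'if' statement anywhere.
    }
}
```

Note that there is no built-in branching construct here at all: the ‘decision’ is made entirely by which class receives the message.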
Further Reading: Smalltalk-80 - The Language and its Implementation
Objective-C and the World Wide Web
Objective-C is a programming language defined by adding an object oriented layer on top of C, using Smalltalk semantics. The Objective-C language began life as the Object Oriented Pre-Compiler. This was a simple preprocessor that took Smalltalk-like constructs and translated them into pure C code. Since C has no native support for dynamic dispatch, the pre-compiler used a separate library to handle dynamic lookup of methods. This evolved into the Objective-C runtime library.
The runtime library is responsible for implementing the aspects of Objective-C that do not map trivially on to C constructs. Methods in Objective-C are translated to C functions, but the static lookup mechanism used for calling C functions is not applicable to the Smalltalk object model and so a dynamic lookup mechanism is implemented in the runtime. The runtime also defines structures to be used for implementing classes which store the metadata needed for introspection on method and instance variable names and types.
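The flavour of this dynamic lookup can be imitated in Java with its reflection API; this is only an analogy for the Objective-C runtime’s behaviour, not its actual mechanism, and the Messenger class is hypothetical:

```java
import java.lang.reflect.Method;

// Dynamic dispatch sketch: look a method up by name (a 'selector') at
// runtime, instead of resolving the call statically as C does.
class Messenger {
    static Object send(Object receiver, String selector, Object... args) {
        try {
            // Derive parameter types from the argument values
            // (reference types only, for simplicity).
            Class<?>[] types = new Class<?>[args.length];
            for (int i = 0; i < args.length; i++) {
                types[i] = args[i].getClass();
            }
            Method m = receiver.getClass().getMethod(selector, types);
            return m.invoke(receiver, args);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("does not respond to " + selector, e);
        }
    }
}
```

The essential point is that the receiver and selector are plain runtime values, so the method actually called can change from one invocation to the next.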
Objective-C was first widely released to the public by Stepstone, a company founded by Brad Cox, the language’s designer, and Tom Love. The company sold an Objective-C compiler and a set of libraries. In 1988, Steve Jobs’ second computer company, NeXT, bought the rights to Objective-C from Stepstone and became the main distributor of Objective-C products.
Objective-C was used in a number of places in NeXT’s operating system, NeXTSTEP. Device drivers were written in Objective-C by subclassing generic devices, and the entire GUI framework was written in the language. The NeXT Interface Builder is generally regarded as the first Rapid Application Development (RAD) tool. It produced bundles called nibs, which contained serialised object graphs. These typically contained the view and controller objects for a window and were loaded and connected to model objects at runtime. The framework made heavy use of the dynamic features of Objective-C. For example, a common pattern was to provide a delegate to view objects. The delegate would implement some subset of a defined set of methods, and the view would query at runtime which of them it implemented. A similar pattern is used in Java; however, since Java lacks the dynamic capabilities of Objective-C, the delegate is required to implement all of the methods, even if an implementation does nothing.
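Java’s reflection API can imitate the runtime check that an Objective-C view performs on its delegate. A minimal sketch, in which the class names, the titleForView method, and the fallback value are all hypothetical:

```java
import java.lang.reflect.Method;

// Delegate sketch: the view asks at runtime whether its delegate
// implements an optional method, and calls it only if present.
class View {
    Object delegate;

    String title() {
        if (delegate == null) {
            return "Untitled";
        }
        try {
            Method m = delegate.getClass().getMethod("titleForView");
            return (String) m.invoke(delegate);
        } catch (NoSuchMethodException e) {
            return "Untitled";   // delegate chose not to implement it
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

// A delegate that opts in to exactly one of the optional methods.
class MyDelegate {
    public String titleForView() {
        return "Hello";
    }
}
```

In ordinary Java the delegate would instead implement a listener interface and be forced to stub out every method; the reflective version shown here is closer in spirit to what AppKit did.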
Although NeXT’s development tools were popular with those who used them, the high price of the machines ($10,000 for the early models) kept the number of users small. One of the best known was Tim Berners-Lee, working at CERN, who used a NeXT machine to write WorldWideWeb, the first web browser. He later claimed that he would not have been able to do so without the ease of programming provided by Objective-C and specifically NeXT’s AppKit framework. Many aspects of AppKit can be seen in the original implementation of the web. The original tags supported by HTML correspond directly to the attributes recognised by the NSAttributedString object used to represent rich text.
Subsequent web browsers were written in more primitive languages, typically C, and it wasn’t until 1996 that Objective-C reappeared on the web scene. This time it was as the core language for WebObjects, the first web application development environment, again produced by NeXT. WebObjects was used for many of the early e-commerce sites on the emerging web, as well as others such as the BBC News site and Disney’s online presence.
WebObjects included a companion library, the Enterprise Objects Framework. This was released two years prior to WebObjects, but found significant use when developing web applications. It used many of the dynamic features of Objective-C to implement object-relational mappings, allowing persistent storage of objects in a relational database. Something similar is found in most web application frameworks today.
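The core trick of such an object-relational mapping — using runtime introspection to map an object’s fields to database columns — can be sketched in Java. This is an illustration of the general technique, not of the Enterprise Objects Framework’s actual design; the Customer class and TinyMapper are hypothetical:

```java
import java.lang.reflect.Field;
import java.util.StringJoiner;

// ORM sketch: introspect an object's fields at runtime and map them
// to columns of an SQL INSERT statement.
class TinyMapper {
    static String insertStatement(Object o) {
        StringJoiner cols = new StringJoiner(", ");
        StringJoiner vals = new StringJoiner(", ");
        try {
            for (Field f : o.getClass().getDeclaredFields()) {
                f.setAccessible(true);
                cols.add(f.getName());          // column name from field name
                vals.add("'" + f.get(o) + "'"); // naive literal quoting
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
        // Table name derived from the class name.
        String table = o.getClass().getSimpleName().toLowerCase();
        return "INSERT INTO " + table + " (" + cols + ") VALUES (" + vals + ")";
    }
}

class Customer {
    String name = "Ada";
    int id = 1;
}
```

A real framework would also handle quoting, type conversion, relationships and fetching, but the metadata-driven mapping is the same idea.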
The most lasting impact on the web can be seen from the influence of Objective-C on the development of Java. Patrick Naughton, one of Java’s original developers, had been offered a job at NeXT prior to beginning work on Java and ‘thought Objective-C was the coolest thing since sliced bread, and…hated C++.’ Java inherits many attributes from Objective-C, including single inheritance, dynamic binding, dynamic loading, classes as objects, formal interfaces (protocols in Objective-C) and primitive (non-object) types. It adds syntax more familiar to C++ users and a more static type system on top of these.
The distinction between a high-level and low-level language is a constantly moving boundary, but the first language to claim the title of a high-level language is generally considered to be FORTRAN.
A FORTRAN program described the algorithm to be executed in a way that was not tied to any specific architecture. One of the key innovations of FORTRAN was the GO TO statement, invented by Harlan Herrick. This allowed branching to a high-level label, rather than to a machine address.
FORTRAN took many years to develop. The idea was proposed by John W. Backus in 1953 to develop more efficient methods of programming IBM’s 704 mainframe. The first draft of the language specification appeared a year later and the first FORTRAN programming manual was published towards the end of 1956. Readers of this manual had to wait another six months before they could put their skills into practice, as the first compiler was not released until April of the following year.
The Rise of Structured Programming
GOTO Considered Harmful