The post Scott Fahlman and the Origin of the Emoticons :-) appeared first on SciHi Blog.

On September 19, 1982, **Scott Fahlman** posted the first documented emoticons, :-) and :-(, on the Carnegie Mellon University Bulletin Board System. As SMS and the Internet became widespread in the late 1990s, emoticons became increasingly popular and were commonly used in text messages, internet forums and e-mails. Emoticons have played a significant role in communication through technology.

“I often think there should exist a special typographical sign for a smile — some sort of concave mark, a supine round bracket.”

— Vladimir Nabokov, NYT interview in 1969

Fahlman earned his Bachelor’s and Master’s degrees in 1973 from the Massachusetts Institute of Technology (MIT). In 1977 he also earned his doctorate at MIT under Marvin Minsky (*A System for Representing and Using Real-World Knowledge*).[8] Since 1978 he has been a researcher at Carnegie Mellon University, where he was appointed professor in 1984. From May 1996 to July 2000, he headed the Justsystem Pittsburgh Research Center.

The original message from which these symbols originated was posted on September 19, 1982. It was recovered by Jeff Baird on September 10, 2002 and is quoted below:

19-Sep-82 11:44 Scott E Fahlman

From: Scott E Fahlman

I propose that the following character sequence for joke markers:

:-)

Read it sideways. Actually, it is probably more economical to mark things that are NOT jokes, given current trends. For this, use

:-(

From one of our previous blog posts you might remember that the original smiley was invented by the designer Harvey Ball in 1963 [4], and there is also evidence that emoticons were used as far back as 1881 by Ambrose Bierce.[5] Bierce was then working at the satirical magazine ‘*Puck*‘ and introduced __/! as a smiling mouth; the exclamation mark was to be appended if the sentence or phrase was meant ironically.

But then, 30 years ago today, Scott Fahlman published the famous smiling (or sad) face lying on its side, which spread widely within a short period of time. Many variations have been created since then, and with the development of new ways of communication, the emoticons’ importance grew. It seems as if a whole new language has been established since then; actually two, because the appearance of Asian emoticons differs remarkably from the so-called “western style”.

Emoticons are an important means for participants in Internet communication to make their emotions clear. In contrast to face-to-face communication, Internet communication takes place without a visible counterpart whose gestures, facial expressions and tone of voice could be interpreted to infer, beyond the words themselves, the speaker’s attitude, the truthfulness and meaning of a statement, and the speaker’s emotional state. The social role of the speaker also provides clues about the meaning of what is said. An ironic statement in written form, for example, often cannot be understood from the words alone. Emoticons help to clarify the intended meaning of such statements. Unlike in other forms of text-based communication, such as letters, strangers often meet on the Internet, which makes it even more difficult to decipher the intended meaning. Emoticons help to reduce the number of misunderstandings.

At yovisto academic video search you might be interested to watch a short video interview with Prof. Scott E. Fahlman from Carnegie Mellon University about his now famous ‘invention’.

**References and Further Reading:**

- [1] Christine Abbt: Punkt, Punkt, Komma, Strich?: Geste, Gestalt und Bedeutung philosophischer Zeichensetzung, 2009 (in German)
- [2] Emoticon History at Time
- [3] What’s the Difference Between Emoji and Emoticons? at Britannica Online
- [4] Harvey Ball and his famous Icon, SciHi Blog
- [5] Nothing really mattered to Ambrose ‘Bitter’ Bierce, SciHi Blog
- [6] Emoticons at Wikidata
- [7] Scott Fahlman at Wikidata
- [8] Marvin Minsky and Artificial Neural Networks, SciHi Blog
- [9] Scott E. Fahlman at Mathematics Genealogy Project

The post Linux at the Core of the Open Source Revolution appeared first on SciHi Blog.

On September 17, 1991, the Finnish computer science student **Linus Torvalds** uploaded **Linux kernel** version 0.01 to the ftp server ftp.funet.fi. This might be considered the date of birth of the famous free operating system Linux, although Torvalds had already announced the new OS on Usenet a few weeks earlier. Nevertheless, Linux has become one of the most popular operating systems today, and this, of course, with good reason.

“I think, fundamentally, open source does tend to be more stable software. It’s the right way to do things.”

— Linus Torvalds

Linus Torvalds was born in 1969 in Helsinki, Finland. His interest in computers started early with a Commodore VIC-20 that his grandfather had purchased for mathematical calculations when Torvalds was around 11 years old. Soon he began to create his own small programs and games on his grandfather’s machine. Around 1987, with money he had received from scholarships, vacation jobs and a loan from his father, Torvalds bought a new home computer, a Sinclair QL with 128 kilobytes of memory and a 68008 processor, whose then-unique ability of pre-emptive multitasking fascinated him. On the Sinclair QL, Torvalds is known to have programmed a clone of the famous Pac-Man game. He graduated from high school in 1988 and began his studies at the University of Helsinki. As a student of computer science, he purchased an IBM PC before receiving his copy of the MINIX operating system, which in turn enabled him to begin work on Linux.

MINIX is an inexpensive minimal Unix-like operating system, designed for education in computer science, written by Andrew S. Tanenbaum. During his computer science studies, Torvalds became curious about operating systems in general, but he soon grew frustrated with the licensing of MINIX, which limited it to educational use only. Therefore, he decided to begin work on his own operating system, which eventually became the Linux kernel. As Torvalds wrote in his book “Just for Fun“, he eventually realized that he had written an operating system kernel.

Torvalds started the development of the Linux kernel on MINIX, and applications written for MINIX were also used on Linux. Later, when Linux matured, further development took place on native Linux systems, and all MINIX components were replaced by GNU applications, because it was advantageous to use the freely available code from the GNU project with the fledgling operating system. Program code licensed under the GNU General Public License (GNU GPL) can be reused in other projects as long as they are also released under the same or a compatible license. Torvalds initiated a switch from his original license, which prohibited commercial redistribution, to the GNU GPL. Developers worked to integrate GNU components with Linux to make a fully functional and free operating system.

“…the Linux philosophy is ‘laugh in the face of danger’. Oops. Wrong one. ‘Do it yourself’. That’s it.”

— Torvalds, Linus (1996-10-16). Post to linux.dev.kernel newsgroup

Today, Linux systems are used in every domain, from embedded systems to supercomputers. The use of Linux distributions on home and enterprise desktops has been constantly growing, and Linux has also gained popularity with various local and national governments, such as those of Brazil, Russia, Spain, India, and China, because of its independence from any single supplier.

At yovisto academic video search you can listen to Linus Torvalds himself sharing his thoughts on git, the source control management system he created two years ago.

**References and further reading:**

- [1] Oliver Diedrich: The History of Linux
- [2] Linus Torvalds: Just for Fun or The Story of an Accidental Revolutionary, Harper Business, 4th ed. (2002)
- [3] Linus Torvalds at Wikidata
- [4] The BASIC Programming Language, SciHi Blog
- [5] IBM and the Success Story of the Personal Computer, SciHi Blog
- [6] Timeline of operating systems, via Wikidata

The post IBM and the Success Story of the Personal Computer appeared first on SciHi Blog.

On August 12, 1981, IBM presented the **IBM 5150, the very first IBM personal computer**. Commonly known as the IBM PC, it is the original version and progenitor of the IBM PC compatible hardware platform. It was created by a team of engineers and designers under the direction of Philip Donald Estridge of the IBM Entry Systems Division in Boca Raton, Florida. It was his decisions that dramatically changed the computer industry, resulting in a vast increase in the number of personal computers sold and bought, and thus creating an entire industry of manufacturers of IBM PC hardware.

However, IBM did not coin the term “personal computer”. It was already in use before 1981. Actually, it was used as early as 1972 to characterize Xerox PARC‘s Alto computer, the revolutionary predecessor of all modern graphical user interface and mouse input driven computers that we know today. But, because of the success of the IBM Personal Computer, the term PC came to mean more specifically a microcomputer compatible with IBM’s PC products.

IBM’s efforts to develop the IBM PC began when Don Estridge took control of IBM Entry Level Systems in 1980 with the goal of developing a low-cost personal computer to compete with increasingly popular offerings from the likes of Apple Computer, Commodore International, and other perceived IBM competitors. To create a cost-effective alternative to those companies’ products, Estridge realized that it would be necessary to rely on third-party hardware and software. This was a marked departure from previous IBM strategy, which centered on in-house vertical development of complicated mainframe systems and their requisite access terminals. In another revolutionary and daring act, Estridge also published the specifications of the IBM PC, allowing a booming third-party aftermarket hardware business to take advantage of the machine’s expansion card slots.

At its introduction in 1981, an IBM PC with 64 kB of RAM, a single 5.25-inch floppy drive and a monitor sold for US$ 3,005 (about US$ 7,682 today), while the cheapest configuration (US$ 1,565), which had no floppy drives, only 16 kB of RAM, and no monitor (under the expectation that users would connect their existing TV sets and cassette recorders), proved too unattractive and low-spec, even for its time. The commercial success of the IBM PC led other companies to develop IBM compatibles, which in turn led to branding such as diskettes being advertised as “IBM format”. An IBM PC clone could be built with off-the-shelf parts, but the BIOS required some reverse engineering. Thus, the IBM PC became the industry standard.

Today, more than 30 years after the start of production, the IBM model 5150 PC has become a collectable among vintage computer enthusiasts, the system being the first true PC as we know them today. Depending on cosmetic and operational condition, an IBM 5150 can fetch a price of more than US$ 4,000. Overall, the IBM model 5150 has proven to be truly reliable; despite their age of 30 years or more, some still function as they did when new.

At yovisto academic video search you might watch a documentary about the Intel 80386 processor from 1987, one of the successors of the Intel 8088 processor that powered the original IBM 5150.

**References and Further Reading:**

- [1] IBM introduces its Personal Computer (PC) at Computer History
- [2] The IBM PC’s debut at the IBM Webpage
- [3] IBM at Wikidata
- [4] The IBM System/360 and the Use of Microcode, SciHi Blog
- [5] Behold the First Commercial Computer (in the US) – the UNIVAC I, SciHi Blog
- [6] Timeline of computers, via Wikidata

The post It’s Computable – thanks to Alonzo Church appeared first on SciHi Blog.

You know, the fact that you can read your email on a cell phone as well as on your desktop computer or almost any other computer connected to the internet is, in principle, possible thanks to mathematician **Alonzo Church**, who (together with Alan Turing) gave the proof that everything that is computable on the simple model of a Turing machine is also computable with any other ‘computer model’.

Church studied at Princeton University, where he also earned his doctorate. After stays at the University of Chicago, the Georg August University of Göttingen and the University of Amsterdam, he became professor of mathematics at Princeton in 1929. He became known to his colleagues in mathematical logic for his development of the lambda calculus, on which he published in 1936 (together with the Church–Rosser theorem), and in which framework he demonstrated that there are undecidable problems, i.e. questions whose answers cannot be calculated mathematically.

In mathematics and computer science, the ‘Entscheidungsproblem‘ (decision problem) is one of the challenges posed by mathematician David Hilbert in 1928. The Entscheidungsproblem asks for an algorithm that takes as input a statement of first-order logic and answers “Yes” or “No” according to whether the statement is universally valid, i.e., valid in every structure satisfying the underlying axioms.

Actually, the origin of the Entscheidungsproblem goes back to Gottfried Wilhelm Leibniz, who in the 17th century, after having constructed a successful mechanical calculating machine, dreamt of building a machine that could manipulate symbols in order to determine the truth values of mathematical statements.[10] Leibniz realized that the first step would have to be a clean formal language, and much of his subsequent work was directed towards that goal.

By the completeness theorem of first-order logic, a statement is universally valid if and only if it can be deduced from the axioms, so the Entscheidungsproblem can also be viewed as asking for an algorithm to decide whether a given statement is provable from the axioms using the rules of logic. In 1936 and 1937, Alonzo Church and his student Alan Turing, respectively, published independent papers showing that a general solution to the Entscheidungsproblem is impossible. To achieve this, Alonzo Church applied the concept of “effective calculability” based on his lambda calculus, while Alan Turing based his proof on his concept of Turing machines. Church and Turing then found that the lambda calculus and the Turing machine were equal in expressive power, and were able to give several more equivalent mechanisms for calculating functions. The thesis that these formalisms capture the intuitive concept of calculability is known as the Church–Turing thesis. The lambda calculus also influenced the design of the LISP programming language and functional programming languages in general.
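The expressive power of the lambda calculus is easy to glimpse in any language with first-class functions. The following sketch shows Church numerals, where the number *n* is encoded as a function that applies another function *n* times; the encoding is Church’s, but the Python rendering and names are my own.

```python
# Church numerals: the number n is a function applying f exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: (lambda f: lambda x: f(n(f)(x)))          # apply f once more
add  = lambda m: lambda n: (lambda f: lambda x: m(f)(n(f)(x)))
mul  = lambda m: lambda n: (lambda f: m(n(f)))             # compose n copies of f, m times

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

one   = succ(zero)
two   = succ(one)
three = add(one)(two)
six   = mul(two)(three)
print(to_int(three), to_int(six))  # 3 6
```

Everything here is built from nothing but function abstraction and application, which is exactly why the calculus can serve as a model of computation.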

It was recognized immediately by Turing that these two concepts are equivalent models of computation. Both authors were heavily influenced by Kurt Gödel‘s earlier work on his incompleteness theorem, especially by the method of assigning numbers (so-called Gödel numbering) to logical formulas in order to reduce logic to arithmetic. Church’s Theorem, showing the undecidability of first-order logic, appeared in “A note on the Entscheidungsproblem”, published in the first issue of the Journal of Symbolic Logic. This, of course, is in contrast with the propositional calculus, which has a decision procedure based on truth tables. Church’s Theorem extends the incompleteness proof given by Gödel in 1931.[11]
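The contrast with propositional logic can be made concrete: propositional validity is decidable by brute force, simply checking every row of the truth table. A minimal sketch (the function names are my own, chosen for illustration):

```python
from itertools import product

def is_valid(formula, variables):
    """Decide propositional validity by checking every truth assignment.
    `formula` is a Python boolean function standing in for a propositional formula."""
    return all(formula(*values)
               for values in product([True, False], repeat=len(variables)))

implies = lambda a, b: (not a) or b

# Peirce's law ((p -> q) -> p) -> p holds in every row of its truth table:
peirce = lambda p, q: implies(implies(implies(p, q), p), p)
print(is_valid(peirce, ["p", "q"]))   # True
# ...while a bare implication p -> q fails for p=True, q=False:
print(is_valid(implies, ["p", "q"]))  # False
```

No such exhaustive check exists for first-order logic, whose structures can be infinite; that is precisely what Church’s Theorem makes rigorous.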

Church was a founder of the *Journal of Symbolic Logic* in 1936 and was an editor of the reviews section from its beginning until 1979. In 1960 Church was elected to the American Academy of Arts and Sciences, in 1978 to the National Academy of Sciences. In 1962 he gave a plenary lecture at the International Congress of Mathematicians in Stockholm (*Logic, Arithmetic and Automata*).

Alonzo Church died on August 11, 1995, aged 92.

At yovisto academic video search you can learn more about Alonzo Church in the lecture ‘At odds with the Zeitgeist: Kurt Gödel’ by Prof. John W. Dawson from the Institute of Advanced Studies in Princeton.

**References and further Reading:**

- [1] Church’s Theorem in Wikipedia
- [2] Alonzo Church, “A note on the Entscheidungsproblem“, Journal of Symbolic Logic, 1 (1936), pp 40–41.
- [3] Alan Turing, “On computable numbers, with an application to the Entscheidungsproblem“, Proceedings of the London Mathematical Society, Series 2, 42 (1937), pp 230–265.
- [4] O’Connor, John J.; Robertson, Edmund F., “Alonzo Church”, MacTutor History of Mathematics archive, University of St Andrews.
- [5] Alonzo Church at the Mathematics Genealogy Project
- [6] Alonzo Church at zbMATH
- [7] Alonzo Church at Wikidata
- [8] Churchill’s Best Horse in the Barn – Alan Turing, Codebreaker and AI Pioneer, SciHi Blog
- [9] David Hilbert’s 23 Problems, SciHi Blog
- [10] Let Us Calculate – the Last Universal Academic Gottfried Wilhelm Leibniz, SciHi Blog
- [11] Kurt Gödel Shaking the Very Foundations of Mathematics, SciHi Blog

The post Let Us Calculate – the Last Universal Academic Gottfried Wilhelm Leibniz appeared first on SciHi Blog.

On July 1, 1646, one of the last universally interdisciplinary academics, active in the fields of mathematics, physics, history, politics, philosophy, and librarianship, was born. **Gottfried Wilhelm Leibniz** counts as one of the most influential scientists of the late 17th and early 18th century and is a significant representative of the Age of Enlightenment. Moreover, he is also the namesake of the association of which the institute I am working for is a member: the Leibniz Association (Leibniz-Gemeinschaft).

Leibniz developed his interests in philosophy and law in his early years, following in his father’s footsteps. He even decided to learn Latin auto-didactically at the age of eight, which is hard to imagine for today’s Latin students, who often experience the language more as a constant torture. But Leibniz stuck with it and was therefore able to attend the famous Thomasschule in Leipzig. His later years at the University of Leipzig and the University of Jena were filled with studies in philosophy, law, mathematics, physics, and astronomy. Because of his broad field of education he is now titled the ‘last universal academic’. He was able to establish a great reputation working for archbishop Johann Philipp von Schönborn in the 1670s. During his time in Mainz he published his first widely received work, ‘Nova methodus discendae docendaeque jurisprudentiae’, a new method of teaching and studying jurisprudence. He also became a member of the British Royal Society for his achievement of creating a calculating machine with a stepped reckoner. Another contribution to the field of mathematics was his (and, independently, Newton’s) development of the infinitesimal calculus, revolutionary then and a basis of many calculations in mathematical, physical, stochastic and economic problems today. In philosophy, Leibniz became famous for the phrase ‘the best of all possible worlds’. It pictures the correlation between good and evil, meaning that the world has a huge potential for development and that even God cannot realize the good things on earth without a certain amount of evil.

Leibniz’s achievements are far too many to all be mentioned in one small blog post [1]. Thus we will focus here on only a small episode. For computer scientists, too, Leibniz anticipated the use of formal logic for automated reasoning and decision making. Besides inventing the binary system, which is the basis of today’s computers, Leibniz argued that if we were able to find a formal (logical) language to express problems, instead of our ambiguous natural language, we should be able to settle arguments simply by performing a calculation. *Let us calculate!* (in Latin: *Calculemus!*) he requested, to resolve every argument or dispute. He believed that much of human reasoning could be reduced to calculations of a sort, and that such calculations could resolve many differences of opinion:

“The only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate [calculemus], without further ado, to see who is right”

— Gottfried Wilhelm Leibniz in a letter to Philip Spener, The Art of Discovery 1685, Wiener 51

Leibniz’s *calculus ratiocinator*, which resembles symbolic logic, can be viewed as a way of making such calculations feasible. Leibniz wrote memoranda that can now be read as groping attempts to get symbolic logic – and thus his calculus – off the ground. These writings remained unpublished until the appearance of a selection edited by C.I. Gerhardt (1859). L. Couturat published a selection in 1901; by this time the main developments of modern logic had been created by Charles Sanders Peirce and by Gottlob Frege.[7]
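Leibniz’s dyadic (binary) arithmetic, mentioned above as the basis of today’s computers, reduces any number to a purely mechanical pattern of 0s and 1s that can be manipulated by fixed rules, very much in the ‘calculemus’ spirit. A minimal sketch (my own rendering, not Leibniz’s notation):

```python
def to_binary(n):
    """Dyadic representation: repeatedly split off the remainder mod 2."""
    digits = ""
    while n > 0:
        digits = str(n % 2) + digits
        n //= 2
    return digits or "0"

def add_binary(a, b):
    """Addition carried out on the dyadic strings and returned as one."""
    return to_binary(int(a, 2) + int(b, 2))

print(to_binary(13))            # 1101
print(add_binary("1101", "1"))  # 1110
```

The point is not the code but the idea behind it: once statements or numbers are encoded in an unambiguous formal notation, “calculating” with them becomes rule-following that a machine could, in principle, perform.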

Another highlight in Leibniz‘s career probably was becoming the first president of the Prussian Academy of Sciences in Berlin. His achievements and contributions to the world’s development are numerous, and therefore he was honored several times during his lifetime and has not been forgotten today. Since a big part of his scientific work is documented in letters, the collection of these papers was inscribed on UNESCO‘s Memory of the World Register in 2007.

At yovisto academic video search, you may learn about the Highlights of Calculus, a lecture by Professor Strang, who shows how calculus applies to ordinary life situations, such as driving a car or climbing a mountain.

**References and Further Reading:**

- [1] Leibniz and the Integral Calculus, SciHi Blog
- [2] O’Connor, John J.; Robertson, Edmund F., “Gottfried Wilhelm Leibniz“, MacTutor History of Mathematics archive, University of St Andrews.
- [3] Gottfried Wilhelm Leibniz at Wikidata
- [4] Timeline for Gottfried Wilhelm Leibniz, via Wikidata
- [5] Gottfried Wilhelm Leibniz at zbMATH
- [6] Gottfried Wilhelm Leibniz at Mathematics Genealogy Project
- [7] Gottlob Frege and the Begriffsschrift, SciHi Blog
- [8] Charles Sanders Peirce and Semiotics, SciHi Blog

The post Churchill’s Best Horse in the Barn – Alan Turing, Codebreaker and AI Pioneer appeared first on SciHi Blog.

On June 23, 1912, English computer scientist, mathematician, logician, and cryptanalyst **Alan Mathison Turing** was born. Outside the world of computer science and mathematics, the name of Alan Turing, probably the most influential figure and in some sense the father of all computing technology, is hardly known. But it was he who laid the foundations of the theory of computing. Already in the 1930s, when no digital electronic computer had ever been built, he showed the limits of computation and thus anticipated all that was to come in the so-called digital revolution. He also laid the foundations of artificial intelligence. And even more, he was a hero of World War II: without him, it might have been impossible to decode Nazi Germany’s encrypted radio messages. This makes him one of the most distinguished figures of the war, and we have him to thank for his part in the victory of the Allied forces. And how was he thanked by his contemporaries… [1]

“A man provided with paper, pencil, and rubber, and subject to strict discipline, is in effect a universal machine.”

–“Intelligent Machinery: A Report by A. M. Turing,” (Summer 1948)

Alan Turing’s father was a member of the Indian Civil Service who returned to England to raise his son in his home country. Young Alan learned the ABC by himself, and at the age of 16 he was already reading the writings of Albert Einstein.[2] Not only did he grasp Einstein’s writings, but it is possible that he managed to deduce Einstein’s questioning of Newton’s laws of motion from a text in which this was never made explicit. He studied mathematics at King’s College in Cambridge, where his fellow students considered him eccentric; his speech often stammered and broke off into brusque silences. Nevertheless, he was awarded first-class honours in mathematics, and in 1935, at the age of 22, he was elected a fellow of King’s College on the strength of a dissertation in which he proved the central limit theorem.

Most of all, he loved sports. In 1937 he published his famous paper “*On Computable Numbers, with an Application to the Entscheidungsproblem*“, in which he introduced the ‘Turing machine‘, a simple model of a computer that is able to solve any problem that can be expressed as an algorithm. The Entscheidungsproblem (decision problem) had originally been posed by German mathematician David Hilbert in 1928.[3] Turing proved that his “universal computing machine” would be capable of performing any conceivable mathematical computation if it were representable as an algorithm. He went on to prove that there was no solution to the decision problem by first showing that the halting problem for Turing machines is undecidable: it is not possible to decide algorithmically whether a Turing machine will ever halt. With his machine he laid the foundation of theoretical computer science and was able to demonstrate even the limits of computation at a time when no computer existed at all.
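Turing’s diagonal argument can be paraphrased in modern code. Suppose some function `halts(f, x)` claimed to decide whether `f(x)` eventually returns; the construction below (a sketch with hypothetical names, not Turing’s formalism) builds a program on which any such fixed oracle must give the wrong answer:

```python
def halts(f, x):
    """A stand-in 'oracle' that always answers: 'f(x) loops forever'.
    Turing showed that ANY fixed implementation is wrong on some input."""
    return False

def diagonal(f):
    # Do the opposite of whatever the oracle predicts about f run on itself.
    if halts(f, f):
        while True:        # oracle said "halts", so loop forever
            pass
    else:
        return "halted"    # oracle said "loops forever", so halt at once

# Our oracle claims diagonal(diagonal) never halts -- yet it plainly does,
# refuting this particular halts(). The same trap defeats every candidate
# oracle, which is the heart of the undecidability proof.
print(diagonal(diagonal))  # "halted"
```

Swapping in any other candidate implementation of `halts` just moves the contradiction: `diagonal(diagonal)` always does the opposite of what the oracle predicts about it.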

In the summer of 1938 he studied cryptology, and he went on to become the leader of a small group of mathematicians at Bletchley Park, an extensive cryptanalytic facility that had the task of deciphering the codes of the German navy. The Germans were using the famous Enigma for encrypting their messages. Turing had something of a reputation for eccentricity at Bletchley Park. He was known to his colleagues as ‘Prof’; he sometimes came to work with his pyjamas under his jacket, and during pollen season he wore a service gas mask while cycling. But it was his cryptanalytic effort that made the decryption of the German radio messages possible, and thus he bears a large part of the responsibility for the Allied victory in the war. Within weeks of arriving at Bletchley Park, Turing had specified an electromechanical machine that could help break Enigma more effectively than the Polish *bomba kryptologiczna*, from which its name was derived. The bombe, with an enhancement suggested by mathematician Gordon Welchman, became one of the primary tools, and the major automated one, used to attack Enigma-enciphered messages. While working at Bletchley, Turing, who was a talented long-distance runner, occasionally ran the 40 miles (64 km) to London when he was needed for high-level meetings, and he was capable of world-class marathon standards.

Immediately after the war, Churchill closed down Bletchley Park. In recognition of Turing’s service he was appointed Officer of the Order of the British Empire. Actually, he stored the medal in his toolbox, considering it to be merely a piece of metal. In 1950 Turing addressed the problem of artificial intelligence and proposed an experiment which became known as the Turing Test, an attempt to define a standard for a machine to be called “intelligent“. The idea was that a computer could be said to “think” if a human interrogator could not tell it apart, through conversation, from a human being.

“‘Can machines think?’ … The new form of the problem can be described in terms of a game which we call the ‘imitation game’.”

— Alan Turing, Computing Machinery and Intelligence (1950)

But in 1952, tragedy struck. During the investigation of a burglary at Turing’s home, he acknowledged a sexual relationship with an accomplice of the suspected burglar. Homosexual acts were illegal in the UK at that time, and so Turing was charged with gross indecency under Section 11 of the Criminal Law Amendment Act 1885. The judge offered him the choice between imprisonment and probation conditional on his agreement to undergo hormonal treatment designed to reduce libido. He accepted chemical castration via injections of stilboestrol, a synthetic oestrogen hormone.

On June 8, 1954, Turing was found dead at his house in Wilmslow near Manchester. Next to him on his bedside table lay a half-eaten apple with the soft scent of bitter almond. It is generally believed that Turing committed suicide with cyanide (actually, the BBC has reported on doubts concerning the results of the original investigation [1]). In his time at Bletchley Park, he sometimes recited whimsical verses such as this one from the Disney movie “Snow White and the Seven Dwarfs”:

“Dip the apple in the brew, let the sleeping death seep through”

In August 2009, John Graham-Cumming started a petition urging the British government to apologise for Turing’s prosecution as a homosexual. The petition received more than 30,000 signatures. The Prime Minister, Gordon Brown, acknowledged the petition, releasing a statement on 10 September 2009 apologising and describing the treatment of Turing as “appalling”. Since 1966, the Turing Award has been given annually by the Association for Computing Machinery for technical or theoretical contributions to the computing community. It is widely considered to be the computing world’s highest honour, equivalent to the Nobel Prize.

At yovisto academic video search you might learn more about the life and works of Alan Turing with the fabulous lecture of Prof. Jack Copeland from MIT on ‘Alan Turing: Codebreaker and AI pioneer’.

**References and Further Reading:**

- [1] *Alan Turing: Inquest’s suicide verdict ‘not supportable’*, by Roland Pease, BBC Radio Science Unit, June 23, 2011
- [2] Albert Einstein revolutionized Physics, SciHi Blog
- [3] David Hilbert’s 23 Problems, SciHi Blog
- [4] Turing, A. M. (1937). “On Computable Numbers, with an Application to the Entscheidungsproblem“. Proceedings of the London Mathematical Society. 2. 42. pp. 230–65. doi:10.1112/plms/s2-42.1.230.
- [5] Turing, Alan (1950). “Computing Machinery and Intelligence” . Mind 49: 433–460.
- [6] Alan Turing at zbMATH
- [7] O’Connor, John J.; Robertson, Edmund F., “Alan Mathison Turing“, MacTutor History of Mathematics archive, University of St Andrews.
- [8] Alan Turing at Mathematics Genealogy Project
- [9] Alan Turing at Wikidata
- [10] Timeline for Alan Turing, via Wikidata

The post Ted Nelson and the Xanadu Hypertext System appeared first on SciHi Blog.

On June 17, 1937, American pioneer of information technology, philosopher, and sociologist **Theodore Holm “Ted” Nelson** was born. Nelson coined the terms *hypertext* and *hypermedia* in 1963 and published them in 1965. Nelson founded Project Xanadu in 1960, with the goal of creating a computer network with a simple user interface, a predecessor of the modern World Wide Web.

“HTML is precisely what we were trying to PREVENT— ever-breaking links, links going outward only, quotes you can’t follow to their origins, no version management, no rights management.”

— Ted Nelson, 1999 [10]

Ted Nelson attended Swarthmore College, where he received his bachelor’s degree in 1959. After one year of graduate study in sociology, Nelson continued his studies in philosophy at Harvard University in 1960. The son of director Ralph Nelson and actress Celeste Holm, Nelson worked alongside his studies as a photographer and filmmaker at John C. Lilly’s Communication Research Institute in Miami, Florida. He was also an instructor in sociology at Vassar College during the 1960s.

During his time at Harvard, Nelson already envisioned a computer-based writing system that would provide a lasting repository for the world’s knowledge and permit greater flexibility in drawing connections between ideas. This vision became known as Project Xanadu, founded in 1960. Nelson proposed a machine-language program able to store, display, and edit documents. His intention was to facilitate nonsequential writing, in which the reader chooses their own path through an electronic document. He published the idea in a 1965 paper, in which he called it ‘zippered lists’. These zippered lists would allow compound documents to be formed from pieces of other documents, a concept named transclusion. In 1967, the project was named ‘Xanadu’ in honor of the poem “Kubla Khan” by Samuel Taylor Coleridge.

Nelson supported the project through administrative, academic, and research positions as well as consultancies. He met Douglas Engelbart (whom he later befriended) and consulted for Brown University on the Nelson-inspired Hypertext Editing System. Nelson also worked with CBS Laboratories, Bell Labs, the University of Illinois at Chicago, and Swarthmore College. During the early 1980s, Nelson was editor of Creative Computing and joined San Antonio, Texas-based Datapoint as chief designer.

When Nelson first predicted many of the features of today’s hypertext systems, the impact was rather limited. While many researchers were interested in his ideas, he lacked the technical knowledge to demonstrate how they could be implemented, and in the end Project Xanadu never really took off. Nelson later stated that some aspects of his vision are being fulfilled by Tim Berners-Lee‘s invention of the World Wide Web, but he dislikes the World Wide Web, XML, and all embedded markup. According to internet activist Jaron Lanier, the main differences between Ted Nelson’s approach and the Web are:

“A core technical difference between a Nelsonian network and what we have become familiar with online is that [Nelson’s] network links were two-way instead of one-way. In a network with two-way links, each node knows what other nodes are linked to it. … Two-way linking would preserve context. It’s a small simple change in how online information should be stored that couldn’t have vaster implications for culture and the economy”

— Jaron Lanier on the differences between Ted Nelson’s Xanadu and the Web
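Lanier’s point can be made concrete with a small sketch. The following hypothetical Python registry (all names here are illustrative, not from any actual Xanadu code) records every link at both endpoints, so the backlink query the one-way Web lacks becomes trivial:

```python
# Hypothetical sketch of a Nelsonian two-way link registry: every link is
# recorded at BOTH endpoints, so each document knows what points to it.
from collections import defaultdict

class LinkRegistry:
    def __init__(self):
        self.outgoing = defaultdict(set)  # doc -> docs it links to
        self.incoming = defaultdict(set)  # doc -> docs that link to it

    def link(self, source, target):
        # Registering the link once updates both endpoints.
        self.outgoing[source].add(target)
        self.incoming[target].add(source)

    def backlinks(self, doc):
        # The one-way Web has no native equivalent of this query.
        return self.incoming[doc]

registry = LinkRegistry()
registry.link("essay.txt", "kubla-khan.txt")
registry.link("review.txt", "kubla-khan.txt")
print(registry.backlinks("kubla-khan.txt"))  # both citing documents are known
```

On the Web, answering `backlinks` would require crawling the entire network; in a Nelsonian system the answer is stored alongside the document itself, which is what preserves context.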

During his later career, Ted Nelson worked on a new information structure called ZigZag and he developed XanaduSpace, a system for the exploration of connected parallel documents.

*Ian Ritchie: The day I turned down Tim Berners-Lee*

**References and Further Reading:**

- [1] Project Xanadu Webpage
- [2] Official Ted Nelson Website
- [3] Ted Nelson at Wikidata
- [4] Project Xanadu at Wikidata
- [5] Doug Engelbart and the Computer Mouse, SciHi Blog
- [6] The Publication of the First Web Page, SciHi Blog
- [7] How the ARPANET became the Internet, SciHi Blog
- [8] Robert Kahn and the Internet Protocol, SciHi Blog
- [9] The Birth of the Internet, SciHi Blog
- [10] Ted Nelson (1999). “Ted Nelson’s Computer Paradigm Expressed as One-Liners”. Retrieved July 3, 2011.


The post Ivan Sutherland – Well, I Didn’t Know it was Hard appeared first on SciHi Blog.


On May 16, 1938, American computer scientist and internet pioneer **Ivan Sutherland** was born. Sutherland received the Turing Award from the Association for Computing Machinery in 1988 for his invention of Sketchpad, an early predecessor of the sort of graphical user interface that has become ubiquitous in personal computers today. Sketchpad could accept constraints and specified relationships among segments and arcs, including the diameter of arcs. It could draw both horizontal and vertical lines and combine them into figures and shapes. Figures could be copied, moved, rotated, or resized, retaining their basic properties. Sketchpad also had the first window-drawing program and clipping algorithm, which allowed zooming.

When asked, “How could you possibly have done the first interactive graphics program, the first non-procedural programming language, the first object oriented software system, all in one year?” Ivan replied: “Well, I didn’t know it was hard.” (Alan Kay, Doing with Images Makes Symbols, 1987)

Ivan Sutherland was born in Hastings, Nebraska, United States. His favorite subject in high school was geometry. His first computer processing experience was with a computer called SIMON, a relay-based computer with six words of two-bit memory, which was lent to the Sutherland household in 1950 by its designer, Edmund Berkeley, one of the founders of the ACM. Its 12 bits of memory permitted SIMON to add up to 15. Sutherland’s first significant program allowed SIMON to divide.[7]

He earned his Bachelor’s degree in electrical engineering from the Carnegie Institute of Technology (now Carnegie Mellon University) in 1959, his Master’s degree from the California Institute of Technology (Caltech) in 1960, and his Ph.D. in EECS from the Massachusetts Institute of Technology (MIT) in 1963 with his dissertation, “*Sketchpad: A Man-Machine Graphical Communication System*,” under the direction of the information theory pioneer Claude Shannon [2]. Among others on his thesis committee were Marvin Minsky and Steven Coons.[3] Sketchpad was an innovative program that influenced alternative forms of interaction with computers. It ran on the Lincoln TX-2 computer and influenced Douglas Engelbart‘s oN-Line System.[4] Sketchpad, in turn, was influenced by the conceptual Memex as envisioned by Vannevar Bush in his influential paper “*As We May Think*“.[6]
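Sketchpad’s ability to move, rotate, or resize figures while retaining their basic properties is, at its core, an application of geometric transformations. The snippet below is only an illustration of that idea in modern Python, not Sketchpad’s actual constraint-solving implementation on the TX-2:

```python
import math

def rotate(points, angle):
    """Rotate a figure (a list of (x, y) points) about the origin."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def dist(p, q):
    """Euclidean distance between two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
rotated = rotate(square, math.pi / 4)

# Rotation preserves distances, so the figure keeps its basic properties:
print(round(dist(rotated[0], rotated[1]), 6))  # side length is still 1.0
```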

“A display connected to a digital computer gives us a chance to gain familiarity with concepts not realizable in the physical world. It is a looking glass into a mathematical wonderland.”

— Ivan Sutherland, (1965). “The Ultimate Display”. Proceedings of IFIP Congress. pp. 506–508.

After leaving MIT, Sutherland was commissioned as a first lieutenant in the U.S. Army and served as an electrical engineer at the National Security Agency (1963) and then as a researcher at the Defense Advanced Research Projects Agency (1964), where he initiated projects in time-sharing systems and artificial intelligence, replacing J. C. R. Licklider as director of DARPA’s Information Processing Techniques Office (IPTO).[5] From 1965 to 1968, Sutherland was an Associate Professor of Electrical Engineering at Harvard University. Work with his student Danny Cohen in 1967 led to the development of the Cohen–Sutherland line clipping algorithm for computer graphics. In 1968, with the help of his student Bob Sproull, he created the first virtual reality and augmented reality head-mounted display system, named *The Sword of Damocles*.
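The Cohen–Sutherland algorithm assigns each line endpoint a 4-bit “outcode” describing where it lies relative to the clip window, so many segments can be accepted or rejected without computing any intersections. A minimal Python sketch of this classification step (the full algorithm then iteratively clips the remaining cases against the window edges):

```python
# Each bit of the outcode marks one side of the clip rectangle the
# point lies beyond; a point inside the window has outcode 0.
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    code = INSIDE
    if x < xmin:   code |= LEFT
    elif x > xmax: code |= RIGHT
    if y < ymin:   code |= BOTTOM
    elif y > ymax: code |= TOP
    return code

def classify(p0, p1, xmin, ymin, xmax, ymax):
    c0 = outcode(*p0, xmin, ymin, xmax, ymax)
    c1 = outcode(*p1, xmin, ymin, xmax, ymax)
    if c0 == 0 and c1 == 0:
        return "trivially accepted"   # both endpoints inside the window
    if c0 & c1:
        return "trivially rejected"   # both beyond the same window edge
    return "needs clipping"           # must be intersected with the window

# Clip window: the unit square.
print(classify((0.2, 0.2), (0.8, 0.8), 0, 0, 1, 1))  # trivially accepted
print(classify((2.0, 0.5), (3.0, 0.5), 0, 0, 1, 1))  # trivially rejected
print(classify((-0.5, 0.5), (0.5, 0.5), 0, 0, 1, 1)) # needs clipping
```

The cheap bitwise tests are the point of the design: in a typical scene most segments fall into one of the two trivial cases and never reach the expensive intersection arithmetic.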

In 1968 he co-founded Evans and Sutherland with his friend and colleague David C. Evans. The company did pioneering work in real-time hardware, accelerated 3D computer graphics, and printer languages. From 1968 to 1974, Sutherland was a professor at the University of Utah. Among his students there were Alan Kay, inventor of the Smalltalk language; Henri Gouraud, who devised the Gouraud shading technique; Frank Crow, who went on to develop antialiasing methods; Edwin Catmull, computer graphics scientist, co-founder of Pixar and later President of Walt Disney and Pixar Animation Studios; and Jim Clark, who designed a virtual reality system and went on to found Silicon Graphics, Netscape, and WebMD.

From 1974 to 1978 he was the Fletcher Jones Professor of Computer Science at California Institute of Technology, where he was the founding head of that school’s Computer Science Department. One area of emphasis was teaching engineers how to design integrated circuits. In 1980 he founded a consulting firm, Sutherland, Sproull and Associates, and served as its Vice President and Technical Director. It was purchased by Sun Microsystems in 1990 to form the seed of its research division, Sun Labs. Sutherland became a Fellow and Vice President at Sun Microsystems.[7] In addition to the Turing Award, Sutherland received the first U.S. National Academy of Engineering Zworykin Award (1972) and a Smithsonian Computer World Award (1996). He was elected to the U.S. National Academy of Engineering (1972) and the U.S. National Academy of Sciences (1978).[1]

At yovisto academic video search, you might watch Ivan Sutherland together with his brother Bert reminiscing about their collective 100-plus years with computers and electronics in an interview from 2004.

**References and Further Reading:**

- [1] Ivan Sutherland, American electrical engineer and computer scientist, at Britannica online
- [2] Claude Shannon – the Father of Information Theory, SciHi Blog
- [3] Marvin Minsky and Artificial Neural Networks, SciHi Blog
- [4] Doug Engelbart and the Computer Mouse, SciHi Blog
- [5] J.C.R. Licklider and Interactive Computing, SciHi Blog
- [6] Vannevar Bush and his Vision of the Memex Memory Extender, SciHi Blog
- [7] Burton, Robert (2012). “Ivan Sutherland“. A.M. Turing Awards.
- [8] Ivan Sutherland at Wikidata
- [9] Timeline of Turing Award winners, via Wikidata


The post Do You Speak Polish… Or Maybe Reverse Polish? appeared first on SciHi Blog.

I guess almost nobody except a few mathematicians and computer scientists has ever heard of the Australian computer scientist **Charles Leonard Hamblin**, who passed away on May 14, 1985. Even most of my fellow computer scientists might not have heard of him. But one of his major contributions to computer science was the introduction of the so-called **Reverse Polish Notation**. Does that ring a bell?

Charles Leonard Hamblin studied mathematics, physics, and philosophy at the University of Melbourne, interrupted by the Second World War and radar service in the Australian Air Force, and obtained a doctorate in 1957 at the London School of Economics. From 1955 he was a lecturer, and later professor of philosophy, at the N.S.W. University of Technology, which was renamed The University of New South Wales during his tenure, remaining there until his death in 1985.

In the second half of the 1950s, Hamblin worked with the third computer available in Australia, a DEUCE computer manufactured by the English Electric Company. For the DEUCE he designed one of the first programming languages, later called GEORGE, which was based on Reverse Polish Notation; in 1957, his associated compiler (language translator) translated programs formulated in GEORGE into the machine language of the computer. Hamblin’s work is considered the first to use Reverse Polish Notation, which is why he is called an inventor of this representation method. Regardless of whether Hamblin invented the notation independently, he demonstrated the merit and advantage of the Reverse Polish way of writing programs for processing on programmable computers, and of the algorithms to make it happen. The second direct result of his compiler work was the concept of the push-pop stack (previously invented by Alan M. Turing for the ACE in 1945), which Hamblin developed independently of Friedrich Ludwig Bauer and Klaus Samelson, and for which he was granted a patent in 1957 covering the use of a push-pop stack in the translation of programming languages.

Hamblin became aware that computing mathematical formulae containing brackets results in memory overhead, which was critical at the time, because memory was small and expensive. One solution to the problem had already been prepared by the famous Polish mathematician Jan Lukasiewicz, inventor of the original Polish notation, which enables a writer of mathematical notation to indicate the order in which to execute operations (e.g. addition, multiplication, etc.) without using brackets. Polish notation achieves this by having an operator (+, *, etc.) precede the operands to which it applies, e.g. +ab instead of the usual a+b. Hamblin, with his training in formal logic, knew of Lukasiewicz’s work. He improved on this principle to save additional storage by putting the operator behind the operands, enabling the computer to make use of a store that did not require an address: the stack.
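This is exactly how Reverse Polish expressions are still evaluated today: operands are pushed onto a stack, and each operator pops its arguments and pushes the result, so neither brackets nor operand addresses are needed. A minimal sketch in Python:

```python
# Minimal stack-based evaluator for Reverse Polish Notation.
def eval_rpn(tokens):
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # right operand is on top of the stack
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))  # operands are simply pushed
    return stack.pop()

# (3 + 4) * 5 written without brackets: 3 4 + 5 *
print(eval_rpn("3 4 + 5 *".split()))  # 35.0
```

Read `3 4 + 5 *` left to right: 3 and 4 are pushed, `+` replaces them with 7, then 5 is pushed and `*` yields 35, the value of (3 + 4) * 5 without a single bracket.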

In the 1960s, Charles Leonard Hamblin turned more and more to philosophical questions. In addition to an influential introductory book on formal logic, he wrote *Fallacies*, a work still regarded as a standard text and in print today, dedicated to the treatment of fallacies neglected by traditional logic, and with which he brought formal dialectics to life. Hamblin later contributed to the development of modern temporal logic, and in 1972 he independently rediscovered a form of duration calculus (interval logic).

This might sound rather weird to you, but 30 years ago, using one of those sophisticated HP calculators that forced you to use, and more importantly to think in, RPN made you the undisputed number one among all the other geeks.

You might learn more about Reverse Polish Notation at yovisto academic video search by watching ‘The Joys of RPN‘

**References and Further Reading:**

- [1] Churchill’s Best Horse in the Barn – Alan Turing, Codebreaker and AI Pioneer, SciHi Blog
- [2] “Everything you’ve always wanted to know about RPN but were afraid to pursue – Comprehensive manual for scientific calculators – Corvus 500 – APF Mark 55 – OMRON 12-SR and others” (PDF). T. K. Enterprises. 1976.
- [3] Parsing/RPN calculator algorithm at rosettacode.org
- [4] Online implementation to translate standard notation to RPN
- [5] Charles Leonard Hamblin at Wikidata


The post Claude Shannon – the Father of Information Theory appeared first on SciHi Blog.

On April 30, 1916, American mathematician, electrical engineer, and cryptographer **Claude Elwood Shannon** was born, the “father of information theory“, whose groundbreaking work ushered in the Digital Revolution. Shannon is of course famous for having founded information theory with one landmark paper published in 1948. But he is also credited with founding both digital computer and digital circuit design theory in 1937, when, as a 21-year-old master’s student at MIT, he wrote a thesis demonstrating that the electrical application of Boolean algebra could construct and resolve any logical, numerical relationship. Believe it or not, it has been claimed that this was the most important master’s thesis of all time. Shannon also contributed to the field of cryptanalysis during World War II and afterwards, including basic work on code breaking.

Claude Shannon was born in a hospital in Petoskey, Michigan, and grew up in nearby Gaylord, the home of his parents. His father was a judge, his mother a language teacher of German origin. During his high school years he worked as a messenger for Western Union. He followed his sister Catherine to the University of Michigan in 1932, and in 1936, after earning degrees in mathematics and electrical engineering, he moved to MIT. In his master’s thesis (1937), *A Symbolic Analysis of Relay and Switching Circuits*, he applied Boolean algebra to the construction of digital circuits. The work arose from the analysis of the relay circuits in Vannevar Bush‘s *Differential Analyzer* analog computer,[2] which Shannon programmed for users. In 1940 he received his doctorate in mathematics at MIT with a thesis on theoretical genetics (*An Algebra for Theoretical Genetics*).
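Shannon’s insight was that relay switches in series behave like Boolean AND and switches in parallel like OR, so circuit design reduces to Boolean algebra. As a small illustration in that spirit (this example is mine, not from the thesis itself), a one-bit full adder can be written directly as Boolean expressions:

```python
# A one-bit full adder expressed purely in Boolean algebra, the kind of
# logical/numerical relationship Shannon showed relay circuits can compute.
def full_adder(a, b, carry_in):
    s = a ^ b ^ carry_in                        # sum bit (XOR of all inputs)
    carry_out = (a & b) | (carry_in & (a ^ b))  # carry bit
    return s, carry_out

# 1 + 1 + 0 = binary 10: sum bit 0, carry bit 1
print(full_adder(1, 1, 0))  # (0, 1)
```

Chaining such adders bit by bit yields a multi-bit binary adder, which is how Boolean algebra “constructs and resolves” an arithmetic relationship in hardware.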

After a short stay as a researcher at the Institute for Advanced Study in Princeton, New Jersey, he joined AT&T Bell Labs in 1941 as a mathematician. In 1948 Claude Shannon published his groundbreaking work *A Mathematical Theory of Communication*, which introduced the word “bit” as the fundamental unit of information for the first time.[3] In this paper he examined the conditions under which information encoded by a transmitter and sent through a noisy communication channel can be decoded at its destination without loss of information, and showed that adding extra bits to a signal allows transmission errors to be corrected.[1] He also successfully carried the concept of entropy, known from physics, over into information theory. At the same time he published *Communication in the Presence of Noise*, in which he combined the representation of band-limited functions by the cardinal series with earlier considerations on maximum data rate, in particular by Harry Nyquist, into a theory of channel capacity in digital signal transmission. Before him, but unknown to him, Vladimir Alexandrovich Kotelnikov had published an identical result in 1933. According to this result, a signal must be sampled at a rate at least twice as high as the highest frequency it contains in order to be reconstructed as an analog signal without loss of information (the Nyquist-Shannon sampling theorem).
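The “bit” can be made concrete with the entropy formula from the 1948 paper, H = -sum(p_i * log2(p_i)), which measures the average information content of a source in bits per symbol:

```python
import math

# Shannon entropy (in bits) of a discrete source with symbol probabilities p_i.
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))            # a fair coin carries exactly 1 bit
print(round(entropy([0.9, 0.1]), 3))  # a biased coin carries less than 1 bit
```

The biased coin’s lower entropy is exactly what makes its output compressible: Shannon showed that entropy is the limit on lossless compression, just as channel capacity is the limit on reliable transmission.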

Another notable article appeared in 1949, *Communication Theory of Secrecy Systems*,[4] in which Shannon clarified the formal foundations of cryptography, elevating it to the rank of an independent science. Shannon had wide-ranging interests and was highly creative; he is said to have juggled while riding a unicycle through the corridors of Bell Labs. Peripheral products of his professional activity include a juggling machine, rocket-powered frisbees, motorized pogo sticks, a machine for reading thoughts, a mechanical mouse that could find its way through labyrinths by means of a simple memory made of relay circuits, and, as early as the 1960s, a chess computer. A 1950 paper on programming a computer to play chess was influential and led to the first computer chess games, on the MANIAC computer in Los Alamos in 1956. He also built the “ultimate machine”, a box with a switch that a mechanical hand turned off after it was turned on. The unit of information content of a message, the shannon, was named after him.

In the mid-1960s he became interested in financial markets and gave several well-attended lectures at MIT (one of his listeners was Paul Samuelson). He proposed a method, now called the *Constant Proportion Rebalanced Portfolio*, to profit from random market fluctuations: after each transaction, the capital was divided into exactly two halves, one for speculation, the other held as a cash reserve. Shannon received many honours for his work. Among a long list of awards were the Alfred Noble Prize of the American engineering societies in 1940, the National Medal of Science in 1966, the Audio Engineering Society Gold Medal in 1985, and the Kyoto Prize in 1985.[1]
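The rebalancing scheme described above can be sketched in a few lines; the alternating price series below is purely illustrative, not data Shannon used:

```python
# Sketch of the 50/50 rebalancing idea: after each period the capital is
# re-split into exactly two halves, one riding a fluctuating asset, one cash.
def rebalanced_growth(price_factors):
    wealth = 1.0
    for f in price_factors:
        # Half the wealth follows the price move, half sits in cash ...
        wealth = 0.5 * wealth * f + 0.5 * wealth
        # ... and the total is then rebalanced back to 50/50.
    return wealth

# An asset that alternately doubles and halves goes nowhere on its own
# (2 * 0.5 = 1), yet the rebalanced portfolio grows every round trip:
moves = [2.0, 0.5] * 10
print(rebalanced_growth(moves))
```

A buy-and-hold investor in this asset ends exactly where they started, while the rebalanced portfolio grows by a factor of 1.5 × 0.75 = 1.125 per round trip, because rebalancing systematically sells after rises and buys after falls.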

At yovisto academic video search, there are many references to the work of Shannon. This is hardly surprising, since he laid some of the foundations of computer science and information theory, and many basic lectures refer to these topics. But there is also a very nice documentary about Claude Shannon, exploring his life and the major influence his work had on today’s digital world through interviews with his friends and colleagues.

**References and Further Reading:**

- [1] John J. O’Connor, Edmund F. Robertson: Claude Shannon. In: MacTutor History of Mathematics archive
- [2] Vannevar Bush and his Vision of the Memex Memory Extender, SciHi Blog
- [3] Claude Elwood Shannon: *A Mathematical Theory of Communication*. In: Bell System Technical Journal 27 (July, October 1948), pp. 379–423, 623–656.
- [4] Claude Elwood Shannon: *Communication Theory of Secrecy Systems*. In: Bell System Technical Journal 28, No. 4 (1949), pp. 656–715.
- [5] Claude Shannon at zbMATH
- [6] Claude Shannon at the Mathematics Genealogy Project
- [7] Claude Shannon at Wikidata
- [8] Timeline for Claude Elwood Shannon, via Wikidata

