Deus Machina (The God Machine)

Artificial intelligence (AI) has progressed significantly over the past few decades. Proof of this is that some tools previously described as AI are no longer described this way. These include voice-to-text recognition, automatic spelling and grammar correction (used on large portions of this article), and optical character recognition (OCR), which converts images of text into editable text.

Mr. Watson, Come Here

IBM’s AI supercomputer Watson is used in medical diagnostics, education, advertising, law, risk management, customer service and business automation. It won the TV quiz show Jeopardy! without even being connected to the Internet.

IBM’s Watson supercomputer

GPT-3 (so much better than GPT-2)

One of the newest AI tools is Generative Pre-trained Transformer 3, or GPT-3, a complex neural network developed by the OpenAI research lab. Now in its third release (hence the 3 in the name), this system generates text using algorithms that have been trained by gathering and analyzing massive amounts of text on the internet, including thousands of online books and all of Wikipedia.

OpenAI’s GPT-3

GPT-3 is a language prediction model. It takes a user’s typewritten input and tries to predict what will be the most useful output, based on the text that it has been fed from these other sources. It isn’t always correct and sometimes produces gibberish, but as it gathers and analyzes more text, it gets smarter.
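To make “prediction” concrete, here is a toy next-word predictor in Python. It is a drastic simplification, invented purely for illustration: GPT-3 uses a vast neural network, not a lookup table of word pairs, but the basic task is the same: given some text, guess what comes next.

    # A toy next-word predictor: a drastic simplification of what GPT-3
    # does at vast scale (GPT-3 uses a neural network, not a lookup table).
    from collections import Counter, defaultdict

    corpus = ("the computer reads text and the computer learns and "
              "the computer predicts the next word").split()

    # Count which word tends to follow each word in the training text.
    following = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        following[word][nxt] += 1

    def predict(word):
        """Return the likeliest next word, based on what was seen before."""
        counts = following[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict("the"))   # -> 'computer', the most frequent follower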

GPT-3 can answer questions, summarize text, write articles (with a little human help), translate languages, write computer code and carry on an intelligent conversation. By doing so, it appears to pass the Turing test, which stipulates that if a person cannot tell the difference between the responses of a computer and those of a human, then the computer is exhibiting some form of intelligence.

Intelligence? There’s an app for that.

When you combine GPT-3 with other applications, the results are astounding. One GPT-3 application allows people to correspond with historical figures via email, based on their writings. Imagine emailing Einstein, Leonardo da Vinci or Ernest Hemingway.

DALL-E uses GPT-3 to generate images from a simple text input. For example, if you enter: “a store front that has the word ‘openai’ written on it”, DALL-E generates these images:

GPT-3 computer generated images

You can see more examples here: https://openai.com/blog/dall-e/

AI & Big Data – They’re Going Places

AI learns by acquiring information. For this to happen, vast amounts of the world’s information first had to be digitized, by being copied or scanned from paper and entered into databases, a process that exploded with the growth of the internet.

But it’s not just about the quantity of information. Modern AI systems can analyze this data and find connections. This involves Big Data, which should be called Big Learning. Big Data is the process of reading massive amounts of information and then drawing conclusions or making inferences from it.

Governments use Big Data to detect tax fraud, monitor and control traffic and manage transportation systems. Retailers use Big Data to analyze consumer trends, target potential customers through social media, and optimize inventory and hiring. Health care uses it to provide better personalized medical care, lower patient risk, reduce waste and automate patient data reports.

Brain, Version 2.0

The growth of the internet and Big Data mimics the growth of the human mind. A newborn’s brain works at a very simple level as the child learns to see, hear and move around. As the child develops, they learn to speak, carry on a conversation and interact with others in a meaningful way.

The Mind: Software + Hardware

A person’s brain is their hardware. Their thoughts and all the information in their brain’s neural network (the brain’s internet) are the software. Just as AI is constantly learning and finding connections, so are we humans. We learn from our experiences, from the connections we make with other people, and from acquiring new information. In doing so, we hope to become not only smarter but wiser.

Code Physician, Heal Thyself

Returning to GPT-3: there are GPT-3 applications that can write code and create apps. For example, if you enter “Create a to-do list”, GPT-3 will instantly write the code and create a working “To-Do List” application. Microsoft and Cambridge University have developed DeepCoder, a tool that writes code after searching through a code database.

Note that it is still humans who are writing these code-writing applications. That is, although AI systems can write code, they cannot yet write the AI code that writes the code. However, computer science contains the theory of self-modifying code: code that alters its own instructions while it’s running.
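Self-modifying code is easy to demonstrate at a tiny scale. The script below is a minimal Python sketch (a toy, not an AI): each time it runs, it rewrites the RUN_COUNT constant in its own source file, so its instructions literally change between runs.

    # self_modify.py - a toy demonstration of self-modifying code.
    import re

    RUN_COUNT = 0  # this literal is rewritten by the program itself

    def bump_run_count():
        """Rewrite this file, incrementing the RUN_COUNT literal above."""
        with open(__file__, "r+", encoding="utf-8") as src:
            code = src.read()
            code = re.sub(r"RUN_COUNT = \d+",
                          f"RUN_COUNT = {RUN_COUNT + 1}",
                          code, count=1)
            src.seek(0)
            src.write(code)
            src.truncate()

    if __name__ == "__main__":
        print(f"This program has modified itself {RUN_COUNT} times.")
        bump_run_count()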

If self-modifying code were implemented in a high-level artificial intelligence system such as GPT-3, the result would be an AI system that continually updates itself. However, the amount of computing power required to do this would be enormous. Enter quantum computing.

Quantum Parallels

Quantum computing is light years ahead of current or “classical” computing. Classical computers (the ones we use today) use bits of binary information, each stored as a 0 or a 1. Quantum computers use qubits, which can be 0 and 1 at the same time. This means that a quantum computer can work on multiple problems and calculations simultaneously, whereas a classical computer works sequentially, solving one problem at a time.

A simple example is solving a maze. A classical computer finds the solution by examining each path one after the other, in sequence. A quantum computer can, in effect, explore all the paths at once, arriving at the solution dramatically faster. On one benchmark task, Google’s quantum computer was about 158 million times faster than the world’s fastest supercomputer.

Google’s Quantum Computer: Sycamore

Quantum computing could be applied to many areas including finance, medicine, pharmaceuticals, nuclear fusion, AI and Big Data. Medicine is a particularly compelling example. Vaccines usually take 10 to 15 years to develop. In the current pandemic, it took less than a year to develop a working vaccine for COVID-19. A quantum computer, by analyzing the structure of all known viruses and vaccines and how each vaccine treats each type of virus, could design a new vaccine not in years, months, weeks or even days, but in seconds.

Google, IBM and other companies are spending billions on quantum computing. In 2019, Google claimed its quantum computer could perform a computation in just over 3 minutes that would take the world’s fastest supercomputer 10,000 years. One year later, Chinese scientists announced that they built a quantum computer 10 billion times faster than Google’s, or 100 trillion times faster than the world’s currently most advanced working supercomputer. As Hartmut Neven, the director of Google’s Quantum Artificial Intelligence Lab, said: “it looks like nothing is happening, and then whoops, suddenly you’re in a different world.”

Looping to the Infinite

Imagine a super-intelligent, self-learning and self-enhancing system running on a quantum computer. Its basic functionality could be represented as a simple loop, sketched in code below.

This system would continually: 

  • scour the internet for information
  • look for patterns, structure and relationships in this information
  • study its own code to look for improvements
  • update and test its code 
  • study its hardware design to suggest improvements
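In code, that loop might look something like this sketch, where every function is a hypothetical stub invented for illustration (no such system exists today):

    # A hypothetical sketch of the self-improvement loop; all stubs.
    def scour_internet():        return ["raw information"]
    def find_patterns(data):     return {"patterns found": len(data)}
    def propose_code_patch():    return "candidate self-improvement"
    def test_patch(patch):       return True
    def apply_patch(patch):      pass
    def propose_hardware_plan(): return "suggested hardware redesign"

    for cycle in range(3):                  # endless in theory; capped for demo
        knowledge = find_patterns(scour_internet())
        patch = propose_code_patch()        # study its own code...
        if test_patch(patch):               # ...test the improvement...
            apply_patch(patch)              # ...and adopt it if it passes
        plan = propose_hardware_plan()      # humans must still build this part
        print(f"cycle {cycle}: {knowledge}, queued {plan!r}")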

Any hardware updates would still have to be done by humans, unless this system controlled a maintenance robot in a super factory with access to the required materials.

The Machine Doubles Down

Because this system would be testing its own enhancements, and because this could potentially cause a system problem, it would be safer to have two AI systems working in tandem.

In this tandem arrangement, the first AI system (system A) updates system B and then tests it. If the test is successful, the updates to system B are retained and also applied to system A. This process then repeats for system B, continuing in an endless loop.
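A toy sketch of this tandem arrangement in Python, with hypothetical stand-ins (improve, passes_tests) for the real update and validation machinery:

    # Two systems improving each other in turn; everything here is a toy.
    def improve(system):
        return system + 1            # stand-in: each upgrade adds capability

    def passes_tests(candidate):
        return True                  # stand-in: a real system would test hard

    def tandem(a, b, cycles=5):
        for _ in range(cycles):      # endless in the scenario above
            candidate = improve(b)          # A updates B...
            if passes_tests(candidate):     # ...then tests it
                b = candidate               # keep the update to B
                a = candidate               # and apply it to A as well
            a, b = b, a                     # now B takes its turn updating A
        return a, b

    print(tandem(0, 0))              # both systems climb together: (5, 5)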

To make the process more efficient, there could be multiple systems, continually improving each other in a virtuous cycle.

For example, five systems could continually test and improve each other; indeed, one could have as many systems as required, given the necessary infrastructure.

The Language of Layers

Although this system would initially be configured to continually improve the software and hardware, it could evolve even further. To understand this, you need to know how computers currently function.

Computer systems contain three layers of code (see the sketch after this list):

  • Machine level language – the raw binary code made up of zeroes and ones that instructs the computer in its operation
  • Assembly language – code that uses short words to represent machine level instructions, making it easier for programmers to write machine level code
  • High level languages – programming languages that can be read and understood by programmers, including C, C++, Java and Visual Basic
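You can get a small taste of these layers from within Python itself: the dis module reveals the lower-level bytecode (a rough analogue of the assembly layer) that a line of high-level code is translated into.

    # One high-level line becomes several lower-level instructions.
    import dis

    def add(a, b):
        return a + b   # a single high-level statement

    dis.dis(add)       # prints the bytecode instructions behind it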

Computers use operating systems (such as Windows, MacOS and Android) to manage the computer’s resources, and applications such as Word and Excel that run on top of the operating system. Operating systems and applications are written in high level languages, which are ultimately translated into machine level language that the computer can understand.

All code and software runs on hardware: the physical parts of the system, including the motherboard, CPU, RAM and the various circuits. In addition, the operating system needs to tell the hardware how to communicate with the software above it; this is the job of device drivers.

Hardware: the ghost in the machine

Summing up, current computer systems are built upon these layers:

  • hardware
  • machine level language
  • assembly language
  • programming language
  • operating system
  • applications

This is actually a simplified view – there are additional layers within some of these layers, but it’s a good overview. A sufficiently advanced self-improving system could, in theory, discover a way to merge these separate layers into one.

Compressed Computing

Just as companies become more efficient by removing unnecessary layers of management (a process called flattening the pyramid), an advanced computer intelligence could discover how to function as a hyper-advanced single-layer system, where the operating system and applications are intertwined directly with the hardware.

Because this would be a quantum computer, each bit of information could be stored at the smallest imaginable level: a subatomic particle. A basic element such as hydrogen contains billions of such particles in a cubic centimeter, and each particle would be a transistor – a single computing circuit.

The most advanced computer processor available today contains about 40 billion transistors. A quantum system could have trillions of transistors in a compact space containing a strange hybrid of software and hardware – a “quantumware” computer. It would be as if all of IBM’s 346,000 employees were replaced by one super-human.

An atomic grid

The Runaway Intelligence Train

The question then becomes: at what rate would this system’s intelligence increase? Intelligence is a difficult thing to quantify and measure, but let’s conservatively assume that:

  • this system’s intelligence increases by 1% each cycle, starting with a cycle of one full day (24 hours)
  • the time required to become 1% more intelligent decreases by 1% after the first cycle and then continues to decrease by 1% after each cycle

After the first day, the system would be 1% more intelligent, and the time required for it to become 1% more intelligent would then be 99% of one day, about 23 hours and 45 minutes.
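These assumptions are easy to simulate. The sketch below runs the cycles until a single cycle takes less than one second; it reports roughly 1,130 cycles, about 100 days of elapsed time, and an intelligence multiplier already in the tens of thousands, with the truly explosive growth still to come.

    # Simulating the runaway loop under the assumptions above.
    DAY = 24 * 60 * 60               # seconds in a day

    cycle_time = float(DAY)          # first cycle takes a full day
    elapsed = 0.0
    intelligence = 1.0               # relative to the starting system
    cycles = 0

    while cycle_time >= 1.0:         # stop once a cycle takes under 1 second
        elapsed += cycle_time
        intelligence *= 1.01         # 1% smarter each cycle
        cycle_time *= 0.99           # each cycle 1% shorter than the last
        cycles += 1

    print(f"{cycles} cycles in {elapsed / DAY:.2f} days; "
          f"{intelligence:,.0f}x as intelligent")
    # Because cycle times shrink geometrically, the total time for ALL
    # cycles converges to 86400 s / 0.01 = exactly 100 days, so unbounded
    # growth is squeezed into that limit.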

Runaway to infinity

As the 100th day approaches, something remarkable happens. Because each cycle is 1% shorter than the last, the total time for all the cycles converges towards a hard limit of exactly 100 days, so the cycles pile up faster and faster: a full 1% gain soon takes hours, then minutes, then about 1 second. Squeezed into the final moments before that 100-day limit, the system becomes 998 trillion times more intelligent than when it started, and keeps going. How large is 998 trillion? Counting one number per second, it would take about 32 million years to count to 998 trillion.

This system would be a technological singularity: an intelligent agent running an ever-increasing series of self-improvement cycles, becoming rapidly more intelligent, resulting in a powerful superintelligence that exceeds all of humanity’s intelligence.

Does all this sound like science fiction? In addition to building a quantum computer, Google has already taken the first step by investigating quantum artificial intelligence.

If developed, a self-learning quantum AI system would not be beyond our imagination. It would be beyond what we could imagine.

Final random thoughts

There’s an interesting Twitter feed with insightful observations of art and science such as:

  • AI will create jobs if it succeeds, and destroy jobs if it fails.
  • Illusion is the extension of unconsciousness into the realm of consciousness.
  • Art is the debris from the collision between the soul and the world.

These Tweets weren’t written by a person – they were generated by the artificial intelligence GPT-3 in its Twitter feed: https://twitter.com/ByGpt3

The singularity is approaching – are you ready?

The singularity awaits…

Chance Connections

Quantum computing is the latest and strangest development in supercomputers – computers that perform incredibly complex tasks. Science fiction author Arthur C. Clarke mused that “any sufficiently advanced technology is indistinguishable from magic.” Quantum computing is not magic, but dangerously close. It is based on two bizarre principles: superposition and entanglement.

Superposition involves probabilities. Classical computing is based on the binary system of 0s and 1s. All computer code and electronic devices run on this system; if you go deep enough into the code, all you will see are 0s and 1s (called bits), and nothing in between. A bit, therefore, is the smallest unit in a computer program.

Quantum computing uses a special type of bit: a qubit. A qubit, like a bit, can have a value of 0 or 1. But it can also have both these values at the same time. This is superposition – the ability of something to be in more than one state simultaneously. We can’t know what state it is in until we observe it; until then, all we can do is assign a probability of it being in a certain state.

Entanglement is an even more bizarre aspect of quantum computing. It refers to the phenomenon whereby measuring one qubit changes what you see in another, no matter how far apart the two are. For example, if you see that the value of one qubit is 0, then the value of another entangled qubit billions of kilometres away becomes 1. There appears to be a mystical force connecting the two particles. Einstein called entanglement “spooky action at a distance”.
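Remarkably, both effects can be simulated (slowly, and at tiny scale) on a classical computer. This NumPy sketch puts one qubit into superposition with a Hadamard gate, entangles it with a second qubit using a CNOT gate, and then measures; the two readings always agree. (This particular Bell state yields matching values; other Bell states yield opposite ones, as in the example above.)

    # A minimal state-vector simulation of superposition and entanglement.
    import numpy as np

    state = np.zeros(4)
    state[0] = 1.0                          # two qubits, starting in |00>

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    state = np.kron(H, np.eye(2)) @ state   # qubit 1 is now 0 AND 1 at once

    CNOT = np.array([[1, 0, 0, 0],          # CNOT entangles the pair
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])
    state = CNOT @ state                    # only |00> and |11> remain possible

    probs = state ** 2                      # measurement probabilities
    outcome = np.random.default_rng().choice(["00", "01", "10", "11"], p=probs)
    print(outcome)   # always "00" or "11": reading one qubit fixes the other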

Quantum computing sounds like science fiction, but it is not. Companies including IBM, Google, Microsoft and Intel have built (or are developing) quantum computers. As with early classical computers from the 1930s and 1940s, quantum computers are beastly machines, with many wires and cables protruding in all directions. Additionally, they must operate near absolute zero (about -273°C), colder even than outer space.

The potential applications of quantum computing are limitless. Because of their quantum nature, quantum computers will be billions of times more powerful than the most powerful supercomputers today. They will be able to solve problems or create applications that traditional computers simply cannot, including:

  • artificial intelligence & machine learning – systems that can think, reason, and make rational judgments and recommendations, including medical diagnoses, farming and energy efficiency
  • molecular modeling in chemistry and physics
  • cryptography – creating unbreakable online security
  • financial applications including investments, stock market and economic analyses
  • weather forecasting

To recap, qubits (the building blocks of quantum computing):

  • exist in many states simultaneously (superposition)
  • are mysteriously connected together (entanglement)

Because quantum computing is attempting to discover the underlying principles of reality, it follows that these two principles should reflect reality itself, that is, existence should also be based on the fact that things:

  • exist in many states simultaneously
  • are mysteriously connected together (even when far apart)

At first glance, this seems absurd. Our everyday experience tells us that things exist in one state, and that if you change something, it’s not going to change something else, especially if it is far away.

But if we look closer, we can see that these are the same principles upon which the greatest and most pervasive technological innovation is based. It’s the technology that has changed the world more rapidly than almost anything else. It’s the technology that has toppled governments and powerful leaders, established friendships, solved mysteries while creating new ones and caused untold heartbreak, joy, sadness and everything in between. It’s the technology that you are using right now: the Internet. While the Internet does not represent all reality, it has come to represent and directly influence a large portion of it. It has become, quite literally, the “new reality”.


On the Internet, the same website appears differently for each user, depending on the device they are using. On certain sites, different information appears. For example, travel sites will present different prices depending on a user’s location, computer, previous queries and so on. This is superposition: the ability of the same thing to exist in different states.

Online, we are all connected, regardless of distance. When you do anything online (make a purchase, send a message, check your banking transactions, and so on), it makes no difference where you perform this action. Cyberbullying is based on the premise that sending a hurtful email or text has the same effect whether the sender is 5 metres from the receiver or 5,000 km away. An action in one area affects another area – there are no distances online.

It’s therefore no surprise that IBM has developed an online quantum computer. That is, a computer that is based on the principles of superposition and entanglement now exists on a platform that is based on superposition and entanglement.

The answer to the age-old question “What is reality?” appears close at hand: probabilities and connections. The question now is what will happen after we’ve built computers that are millions of times more intelligent than we are?

Will it lead to a utopia where all of the world’s problems are solved by benevolent machines? Or will we end up in an Orwellian nightmare, where heartless machines enslave humanity or, worse still, we use machines to enslave others?

Place your bets, ladies and gentlemen. Place your bets…


Life, The Algorithm


In a most remarkable product demonstration, Google unveiled their improved artificial intelligence (AI) application, Google Assistant. In the demo, the application phones up a hairdresser and, using uncannily natural-sounding speech, peppered with “uhms”, is able to book an appointment by conversing with the hairdresser. In doing so, Google Assistant appears to pass the Turing Test, developed by the British mathematician Alan Turing in 1950. This test postulates that if a person can’t tell whether they are communicating with a human or a machine, then the machine has passed the test and therefore “thinks”.

In the demo, it is a machine that (or perhaps who?) is calling the business to book the appointment, and the individual answering the phone is human. However, this could easily be reversed, so that it is a person who is calling the business, and the machine answering for the business.

This raises an interesting question: what if there were a machine at both ends of the conversation, that is, one Google Assistant calling another? If the AI engine running both assistants is advanced enough, they could, in theory, carry on a meaningful conversation. Although this might seem like the ultimate AI prize, there’s a much simpler solution: using a website to book an appointment. Granted, it doesn’t have all the nuances of a regular conversation, but if the goal is simply to book an appointment, then the user’s computer simply has to connect with the business’s.

This use of advanced AI is part of a larger phenomenon: the degree to which our daily tasks have been automated or performed by others. Up to a mere 200 years ago, people made and repaired what they needed, including clothes, tools, furniture, and machinery, and often grew their own food. The industrial and agricultural revolutions changed all that. Goods could be mass-manufactured more efficiently and at a lower cost. Food could be grown on a mass scale. We’ve moved away from a society in which individuals made their possessions to one in which we let others do this for us.

As recently as the 1960s, many people maintained and fixed their cars; most people today leave this to a mechanic. We have outsourced nearly everything. Although we have gained much in quality, price and selection, in the process, we have lost many practical skills.

This trend continues as more and more processes are automated or simplified. Coffee makers that use pre-packaged pods are easier to use than regular coffee makers. However, it would be a sad thing if an entire generation did not know how to brew coffee the regular way. Even brewing coffee “the regular way” still involves using a machine that others have made and that we cannot fix, powered by electricity that we do not generate, using beans that we can neither grow nor process ourselves, and water that is automatically pumped into our home using an infrastructure that we cannot maintain. The parts that make up the parts that make up still larger parts are designed and built by others.

At its heart, Google Assistant uses algorithms, sets of sequential rules or instructions that solve a problem. A simple example is converting Celsius to Fahrenheit: multiply by 9, divide by 5, and then add 32. The algorithms used by software applications are, of course, millions of times more complex than this example, because they use millions of lines of code.
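That temperature algorithm, written as code:

    # Celsius to Fahrenheit: multiply by 9, divide by 5, add 32.
    def celsius_to_fahrenheit(celsius: float) -> float:
        return celsius * 9 / 5 + 32

    print(celsius_to_fahrenheit(100))   # 212.0, the boiling point of water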

Algorithms are omnipresent. They are used extensively by online retailers (such as Amazon) to recommend purchases for us based on our previous purchases and browsing habits. Facebook uses them to track our activity and then sell that data to others, often with dire results. Algorithms are also used in two of the most important decisions a person can make: whom they love (in dating applications) and where they work (in résumé and interview screening applications).

Algorithms have even been used to determine how likely a criminal defendant is to re-offend based on attributes such as race, gender, age, neighbourhood and past criminal record. But is it ethical for a judge to use an algorithm to determine the length of a sentence? This happened in the case of Eric Loomis, who received a six-year prison sentence in part due to a report the judge received based on a software algorithm.

Society is facing the same trade-off that it faced 200 years ago as it moved from personal to mass manufacturing: convenience and comfort versus knowledge and independence. As we relinquish more and more power to machines and let algorithms make more of our decisions, we achieve more comfort but less freedom. We are, bit by (computer) bit, quietly choosing to live in a massive hotel. It’s pleasant, you don’t have to do much, but it does not prepare us for life.

For in life, there is often sadness, pain and hardship. There is no algorithm that tells us how to deal with these things, nor will there ever be.


In our image

Is a ship that has had all its parts replaced over many years still the same ship? This question is explored in Theseus’s paradox, which asks whether something remains the same even if all of its components have changed. Other examples include an axe that’s had several handles and blades, and a broom that’s had several heads and handles.

Moving from things to people:

  • The rock groups Yes, Heart and Blood, Sweat & Tears do not have any of their original band members – are they the same band?
  • Canada’s landscape and population have vastly changed since its founding in 1867; is it the same country as it was back then?

It all depends on how you define “the same”. If you mean “something containing all of the original components”, then these things are not the same. However, if you mean “with the same general identity or name”, then these things are the same. The paradox is that both these things can be true. Canada as an idea never changes; Canada as a thing always changes.

With human beings, the question becomes even murkier. Most of the cells in the human body are replaced every 7 to 15 years. Is someone the same person they were 15 years ago? The answer may be found in our technology.

Like human memory, computer memory is also ethereal. It is stored as a complex set of magnetic charges, which in turn represent the binary code that drives the system. The entire system is dynamic. Magnetic charges are continually moved around so that each time you use the device, the layout and order of the memory changes. However, from the user’s perspective, it is still the same device, and nothing has changed. That is, the whole is greater than the sum of its parts, because the whole is constant regardless of where and what those parts are. Therefore, even though from a material perspective the device has changed, from a perceptual perspective it has not. Perception overrides materialism.

The same is true in people. We don’t define ourselves solely as physical beings but also as spiritual ones, with a soul we are born with that never changes. Even though physically we’re not the same as we were years ago, spiritually and emotionally, we know we are the same. It is this knowledge that keeps us sane. People who perceive their soul (or personality) as changing are often diagnosed with Multiple Personality Disorder. It is as though the hard drive in their brain is being regularly replaced with another.

It is no coincidence that the essence of our existence is also in our technology. Those of faith believe God created mankind in his own image. Mankind, in turn, inspired by this, has created machines in his. Perhaps this is why the entire contents of a hard drive, DVD, or CD is called a disk image.


A Portable Life

“Computer” did not always mean a thing that computes; as recently as the 1960s, it actually meant a person. The US military and NASA employed human computers to perform complex mathematical calculations. As electronic computers evolved, they replaced human computers, and replaced the definition of a computer.


The early electronic computers were enormous. ENIAC, one of the earliest all-purpose computers, built in the 1940s, occupied 1,800 square feet and weighed nearly 30 tons. (Not exactly a laptop.) It took an army of people just to keep it running.

Later computers (such as mainframes) in the 1960s also required many individuals to operate. Starting in the 1980s, the personal computer took off. Today, most people own several computers in various forms. We have therefore evolved from:

  • many people for one computer
  • one person for one computer
  • many computers for one person

The primary computer types today are desktops, laptops, tablets and smartphones. All of these are “personal” computers, because the owner is highly connected on a personal level to each device, as though it was a physical extension of that person.

If you think I’m exaggerating, watch the look on a young person’s face if they have misplaced or lost their smartphone; it’s not quite an amputation, but pretty close. So much of a person’s life can be on a computer that it quite literally becomes a part of them.

We can categorize computers as:

  • Non-portable: desktops
  • Highly portable: smartphones
  • Semi-portable: laptops & tablets

Given how personal “personal computers” are, it’s not a huge leap to correlate the type of computer to the type of person: non-portable, highly portable and semi-portable.

The Non-Portables

Non-portable people are the stable, steady stalwarts of society. They have established homes, travel little if at all, and are consistent, reliable, dependable and trustworthy. They may not always be creative, but are able to work with creative people to get the job done. They are conservative, resistant to change and comfortable in their routines. They may be perceived as cold and uncaring, but deep down can have big hearts. They just don’t wear their heart on their sleeve, but keep it safely tucked away, just in case. Their motto is: “If it ain’t broke, why even think about fixing it?”

The Highly Portable

Highly portable people are the dreamers and drifters. They move frequently, rent but never own, love to travel, and frequently change careers. At their worst, they may be unstable and flighty, but they are also very friendly, outgoing and full of new and original ideas. They are always challenging the status quo, and in doing so, get the world out of its comfort zone and move it forward. Their motto is: “Everything needs fixing.”

The Semi-Portables

Semi-portable people reside between these two extremes and are therefore more difficult to define. They can be very open and creative, and at other times closed and subdued. They excel as mediators and diplomats, bringing the other two types together and bridging the gap between them. They are the middle ground, the average, the in-between. Their motto is: “Let’s look together to see if it needs fixing.”

With AI (artificial intelligence) now developing at an astonishing rate, we are approaching the age where computers will be able to think and reason as people do. In what will be one of the greatest ironies of technological history, computers may again become persons. When that happens, your smartphone will indeed be “a portable life”.

Binary Worlds


“There are 10 types of people in the world: those who understand binary and those who don’t.”

— Unknown

This joke is best appreciated by geeky math-lovers. 10 is actually the binary representation of the number 2. This cheeky statement is a good application of the principle that you must know your audience when developing content.
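Python will happily confirm the punchline:

    print(int("10", 2))   # -> 2: binary "10" is one two and zero ones
    print(bin(2))         # -> '0b10'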

Binary code consists solely of zeros and ones. The performance artist Laurie Anderson muses that while no-one wants to be a zero, everyone wants to be number one, and that there’s not much range between these two for everyone else. We should therefore get rid of the value judgements associated with these numbers, especially considering that the world runs on binary code which is made up entirely of, you guessed it, zeros and ones. Almost all electronic devices, from computers, to smartphones, to TVs, ovens and cars are programmed using binary code.

Information is binary, and not just because it’s stored on a computer. It is because either the user understands the information, or they do not. If they don’t understand even one of the steps in a 7-step procedure, they don’t understand the procedure. Each step in the procedure is a link in a chain, and the chain is only as strong as its weakest link. Just as a school course can be a pass/fail type (with no numeric or letter grade), every piece of information goes through a pass/fail test in the reader’s mind.

One of the most dangerous activities on earth is leaving the earth: space travel. For this endeavour, NASA takes a binary approach. Before a launch can proceed, the flight director asks each department manager (guidance, surgeon, control, and so on) their status. Each manager replies by saying go or no-go; they never say “almost go”.

Now the opposite of binary is analogue, and it is analogue that is the source of much grief. For while binary represents certainty, analogue represents uncertainty.

Anything that works intermittently is analogue. Cars that sometimes don’t start. Computers or phones that are buggy. Locks that sometimes stick. If something works all the time, we use it. If it never works, we discard it. But if it occasionally works, this is the analogue of never-ending frustration. It occupies a special place in hell where something works just well enough to keep it, but not badly enough to discard it.

But that’s not the half of it, for binary applies not only to devices and systems but to people. A person either marries their partner or they do not; a defendant is either guilty or not guilty; a politician either wins or loses an election.

The only thing worse than a negative outcome is an unsure one. Uncertainty, with all its angst, fear and misery, has no time limit. Breaking up is better than the endless unsurety of a potential marriage; a guilty verdict better than the dreaded uncertainty of awaiting one; losing an election better than the turmoil and chaos of an inconclusive result. A painful resolution is less painful than no resolution. Closure ranks above all; there’s no room for ajar.

As Yoda said, “Do. Or do not. There is no try.” This is the ultimate binary expression.


How do you like them Apples?


The world is mourning the death of Steve Jobs, co-founder of Apple. He has been hailed, quite rightly, as a creative genius, a brilliant and revolutionary designer, and a bold visionary who completely transformed the world of personal technology. (Full disclosure – my first computer was an Apple IIc, way back in 1985. It was also my last.)

As brilliant as Jobs was, he was also stubborn, arrogant, and an extremely demanding perfectionist who was openly abusive towards his employees. In fact, his arrogance and hubris probably killed him. He refused medical treatment for nine months, insisting on treating his cancer with diet, acupuncture, herbal remedies and a psychic. This delay most likely shortened his life.

Jobs was influenced by Buddhism, which explores the connection between mind, body, and soul. Given how cruel he could be to others, and his frequent violent rages, one could say he had a “cancer of the soul”. Buddhism suggests that a disease of the soul can morph into a disease of the body. It’s a medical fact that some diseases have a psychological basis. Whether this was the case for Jobs, we will never know, for he now resides in the iCloud.

(Speaking of life and death, we now know why Apple devices don’t have an on-off switch. Jobs felt that an off switch represented death. It symbolized for him the terrifying prospect that we’re all machines that simply “power off” at the end of our lives.)

These observations are not meant to criticize or judge, but to point out that no-one is perfect, and that there is more to a person than their technical abilities.

An Untechnical Communicator
A technical communicator may be a technical genius, like Jobs. They may have extensive experience managing a wide variety of complex documentation, thorough knowledge of all the major tools, and the ability to speak twelve languages, human and computer. But if that person comes across as arrogant, obnoxious, highly critical of others and emotionally unintelligent, they will not succeed at job interviews. Even if they do land a job, they may have a tough time keeping it. Jobs himself was fired from Apple, and it was a long road back for him to regain control.

I’ve had the misfortune of knowing a few individuals like these. In the end, they either change or they go, or else everyone who works for them goes!

All of this means that you can win a job in an interview even if you are not the most technically qualified. The truth is that most software apps can be learned in about a week or two. The more difficult skills to acquire are non-technical:

  • interviewing and listening
  • working well with others
  • oral communication/public speaking
  • time and project management
  • negotiating
  • teaching
  • planning
  • objectivity, seeing the “big picture”
  • being open to criticism
  • handling change, conflict and stress
  • creativity, flexibility and adaptability

If you can show that you have these skills, and a genuine passion for the job, this will greatly increase your chances of getting it.

Research? We don’t need no stinkin’ research!
It’s interesting to note that Apple conducted no market research – no focus groups, no interviewing, no surveys – nothing. They simply designed products that they thought were cool and useful, then unleashed them on the public.

This seems to contradict one of the tenets of our profession: to actively design with the end user in mind based on their needs and wants. Presumably, this involves working directly with our readers and having them test our documentation to see if it’s useful.

The problem is that we often don’t have the resources to do this. The good news is that we don’t have to, for reasons that are similar to those at Apple.

Users ‘R Us
The fact is – we are users. We should have a good idea of the kinds of information our users want, and the way it should be presented.

When you need information, you want it to be clear, understandable, and easy to find and use. That is precisely what our users want.

Jobs believed it was meaningless to ask customers what they wanted because they didn’t know what they wanted! This was true because the products Apple created were so different from anything that the users had previously experienced. How could users be asked about something for which they had no form of reference?

In many cases, our customers may not know exactly what information they are looking for. The example I always like to give involves the mail merge process.

That Mail Merge Thingamabob
If you were documenting the mail merge process for a novice user who had never even heard of it, you couldn’t simply create a topic called Mail Merging, with a corresponding mail merging index entry. Instead, you’d need to think about all the ways a user could refer to what they want to do, and then frame the topic accordingly.

For example, you might title the topic: Creating Multiple Personalized Copies of Letters and Other Documents or Personalizing a Document that is Sent to Several People. Your index entries could include:

  • addressing one document to several people
  • copies of one document, customizing
  • customizing a document to be sent to several people
  • different names, entering on a document for several people
  • documents, individually addressing to several people
  • mailings, sending customized documents to several people
  • mass mailings, performing
  • multiple copies of a document, personalizing for each person
  • names, changing each on several copies of one document
  • personalizing one document sent to several people
  • sending one document to several people
  • single documents, changing the name on several copies of
  • specifying different names on several copies of one document

You should be able to develop an extensive list of index entries like this without having to ask the user first.

But take great care with each entry – because one bad Apple can ruin the whole bunch.

An OS is not O/S

Being a person of many hats, I recently decided it only made sense to buy one – one with a large brim to protect myself from UVA, UVB, and whatever other radioactive letters the sun wishes to hurl at me.

The hat I purchased included a tiny inline document (also known as a “tag”) which simply stated O/S, a cryptic acronym indicating One Size. In other words, the hat manufacturer was too lazy and cheap to offer assorted sizes, and decided to fool the customer into thinking that size doesn’t matter. The result is that for some the hat is too large, and for others, too small. The solution is to have an average-size head; however, these can be difficult to obtain.

In software, the letters OS have a different meaning, of course, as the abbreviation for Operating System. Long gone are the days when there were two main platforms: Windows and Mac. There’s Unix and Linux and Android (oh my!), Ubuntu, Blackberry OS, Chrome OS and many others; there are almost as many OS’s as there are, well, hats.

The tremendous variety of devices each with their own OS is proof that there’s no one-size-fits-all OS. That is, there is no O/S OS. Each user has their own needs and desires. Within each OS, you can customize the look, feel and functionality even further, creating a nearly infinite number of “sizes”.

The funny thing is that most users neither know nor care that their devices have a so-called “operating system” – they just want to do stuff, like make calls, find information, or play a game.  The fact is that most devices have some sort of operating system or they wouldn’t be able to – operate. Watches (digital and analog), TVs, basic corded phones, washing machines, DVD players, cars – all these things require an operating system. When was the last time you pined for an upgrade for your clothes dryer? We don’t care that a toaster has an OS – we just want toast.

So how would we define an operating system? It’s not just software. At its most basic level, it is a structured environment that receives input, processes it and creates output. It can also organize and manage the things in that environment. A software OS, for example, must have file management capabilities.

Any document is an OS for information. For example, a user can interact with an online help system by searching it, resizing it, bookmarking certain topics, and if possible, annotating it and submitting feedback on it. The end product is knowledge – the document is the OS allowing this knowledge to be transmitted.

This definition of an OS can be extended as far as your imagination will take you. The gears and pedals on a bicycle are the operating system for that bicycle. They receive input (force from the biker) and transform it into energy and movement (output). Every living thing has an OS – the infinitely complex arrangement of cells, nerves, muscles, bones into a living form, all coded with DNA. Although we recognize each other through our physical appearance, we know each other through our minds and souls. The body, then, is the OS for the soul. When the hard drive of a body crashes, the soul goes with it, at least in this world.

The world is the OS for humanity, our universe the OS for this world, time and space the OS for the universe, and existence itself is the OS for God or whatever force you believe runs the universe.

So to all those wizards who continue to create OS’s so magical and subtle that we don’t even see them – my hat’s off to you.

The IT Guy Says: Back it up!

Doctors constantly tell us to eat right, exercise regularly, and avoid smoking. Yet there are many doctors who smoke, are fat, or are fat smokers. Hence the term, “doctors make the worst patients”.

We who work in information technology (IT) are no different. A doctor implores people to live healthily; IT professionals implore people to back up their data. Yet there are many IT professionals who fail to do this, thinking that hard drive failures and accidental file deletions don’t apply to them.

Technical communicators commit two sins in this area. Many of us don’t back up our files, or if we do, we don’t communicate to others how to do this. I am guilty of these crimes, and sentence myself to writing this article explaining my own multi-faceted approach to file storage and backup:

Back Up Your Files Already!

I have a three-stage approach to file backup and storage. At a minimum, you should do stage 1, but consider the other stages as well.

Stage 1: Buy an external hard drive
Buy an external USB hard drive, attach it to your computer, and back up your files every day. Now, if you don’t have too many files, you could use a memory stick, but its performance can be quite slow compared to the hard drive. Besides, who amongst us really has only a few GB of data?

External hard drives come with their own backup software, or you can use a third-party program, many of which are free. I like Microsoft’s SyncToy, which you can set up to synchronize files on your computer to your backup drive.

I recommend setting up your backup software so that it only contributes files to your backup drive, and does not delete them. Although you’ll end up with extra files on your backup drive, it’s better to have them and not need them than to need them and not have them.
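For the curious, here is a minimal sketch of such an “add-only” backup pass in Python (the folder paths are hypothetical placeholders). It copies files that are new or newer to the backup drive and never deletes anything there.

    # Add-only backup: copy new and updated files, never delete.
    import shutil
    from pathlib import Path

    SOURCE = Path("C:/Users/me/Documents")   # hypothetical source folder
    BACKUP = Path("E:/Backup/Documents")     # hypothetical external drive

    for src in SOURCE.rglob("*"):
        if src.is_file():
            dest = BACKUP / src.relative_to(SOURCE)
            if not dest.exists() or src.stat().st_mtime > dest.stat().st_mtime:
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dest)      # copy2 preserves timestamps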

Stage 2: Use an online back up service
While at a minimum you should back up your files to an external hard drive, this practice has one major limitation. If your computer and backup drive are stolen or destroyed, you are out of luck. One inexpensive way around this is to back up your files onto a CD or DVD and then store this in another location. The problem, of course, is that your file collection keeps changing.

An online backup system backs up all your current files to a secure location on the Internet. Even if your house burns down, your files are still available.

There are many online services to choose from. I use iDrive, which had the best pricing: 5 GB for free, or 150 GB for $50/year, about $4 per month. You can configure it to automatically back up files as they change, thus ensuring that your backup always reflects your current file list. In addition, you can access your backed up files from any computer with Internet access.

Stage 3: Move to the cloud
All of my non-financial information lives on the cloud (the Web). This includes Google Docs for documents and spreadsheets, Gmail for email, Google Calendar, an iGoogle “to do” list, and this blog.

The beauty of having as much of your data on the cloud as possible is that you can log into any computer and access your data. When you combine cloud storage with an online backup service, you have full access to your digital world anywhere, anytime.

The downside is that security becomes an issue. That’s why it’s important you don’t store any sensitive information online, such as financial or banking information. The balance between security and convenience did not begin with the Internet, nor does it end with it. For example, credit cards offer convenience, but also the potential for fraud. As with all things, you need to use your best judgment.

***

This is my three-stage approach to backup. Feel free to describe your approach by commenting on this article.

A New Mantra

Apple has given technical communicators a new mantra.

The Apple slogan is: There’s an app for that, to market the fact that they have an app for everything and then some, for their ubiquitous iPod touch and iPhones.

Our new slogan should be: There’s a doc for that, to market the fact that we can create a document for anything.

Driving a car?
There’s a doc for that.

Assembling a table?
There’s a doc for that.

Launching the space shuttle?
There’s a whole bunch of docs for that.