The Paradoxical Universe

A paradox is a statement or idea that contains an inherent contradiction. Some examples are:

The paradox of choice: Buyers want choices, but too much choice is overwhelming and prevents buyers from making a choice.

The paradox of tolerance: A society that tolerates everything must also tolerate intolerance. Is a society that tolerates intolerance tolerant or intolerant?

The chicken and egg paradox: The classic biological paradox: which came first, the chicken or the egg? I’ve always wondered why chickens were chosen for this paradox. No one asks if the kitten or cat came first, the puppy or the dog, or the baby or the adult.

The machine-tool-construction paradox: This is similar to the chicken and egg paradox. Every machine that exists was built using tools made by other machines. How could the first machine have been built if there were no machines before it to make the tools needed to build the first machine?

Perhaps the simplest paradox is the sentence: This sentence is false. If it’s true, then it’s false, and if it’s false, then it’s true, in an endless loop. 

A visual paradox – how does the water keep flowing?

Going in circles

One cause of this paradox is the fact that “This sentence is false” refers to itself in a strange form of circular reasoning. Circular reasoning is a flawed argument in which the conclusion is assumed to be true, and this conclusion is then used to “prove” the premise.

Examples of circular reasoning are:

John must be guilty because he was arrested. This argument assumes that only guilty people are arrested, and that if someone is arrested, they must be guilty.

You should obey the law because it’s illegal to break the law.  By definition, if you don’t follow the law, you are committing an illegal act, so this statement does not argue anything.

All circular reasoning takes the form: A = B because B = A. The sentence “This sentence is false.” is a one-sentence version of this form.

Circular reasoning

This is not a heading

One way to resolve “This sentence is false” is to declare it is not really a sentence, or at least not one with any meaning.

It is possible to group words together into a sentence that has no meaning. Noam Chomsky provides a good example: Colorless green ideas sleep furiously. This sentence is grammatically correct but meaningless. Therefore, we could say that the sentence “This sentence is false” is null and void. We thereby sentence this sentence to exile in the land of forbidden sentences.

However, this smacks of circular reasoning, because we’re effectively saying: This sentence is not a sentence because it’s not a sentence. Our desire to resolve this and other paradoxes reflects a larger world-view, that there should be no contradictions in life. However, our entire world is based on contradictions and paradoxes.

Let there be light

One of the most ubiquitous things in our universe is light. However, there is a fundamental contradiction in the nature of light. Depending on how you observe it, light is a wave or a particle. This contradiction in light’s structure is known as wave-particle duality.

Wave-particle duality relates to another area of physics, quantum mechanics, which has more paradoxes. A principle of quantum mechanics is that a particle can be in more than one place at the same time, a completely paradoxical existence.

It’s big, relatively speaking

Quantum mechanics is the study of the very small: atoms and their components. General relativity, by contrast, is the study of very large objects like planets and galaxies.

Because we live in one universe, there should be one set of rules governing everything. However, there are fundamental contradictions between quantum mechanics and general relativity.

In quantum mechanics, time flows consistently, whereas in general relativity, time can be bent by matter and energy. Quantum mechanics is based on the uncertainty principle, which states that you cannot know both the exact position and momentum (speed) of a subatomic particle at the same time, an idea incompatible with general relativity, where position and speed are both known.

Let’s get small

Quantum mechanics also includes the bizarre concepts of Planck length and Planck time, both unimaginably small units of measure.

A Planck length is 1.6 × 10⁻³⁵ meters, or about 0.00000000000000000001 times the width of a proton. If a proton were magnified to the size of the observable universe, a Planck length would grow only to about the size of the earth.

Planck time is about 10⁻⁴³ seconds. One unit of Planck time is to a second roughly what a second is to 600 billion trillion trillion years, a time vastly greater than the age of the universe.

Planck length and Planck time are thought to be the smallest possible intervals that can be measured. If you tried to go any smaller, the laws of physics would break down.
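To see where comparisons like these come from, here is a quick back-of-the-envelope check in Python – a rough sketch using the standard published values (the proton and universe sizes are approximate):

    # Sanity-check the Planck-scale comparisons above (approximate values).
    PLANCK_LENGTH = 1.6e-35        # meters
    PLANCK_TIME = 5.4e-44          # seconds
    PROTON_DIAMETER = 1.7e-15      # meters
    UNIVERSE_DIAMETER = 8.8e26     # meters (~93 billion light-years)
    SECONDS_PER_YEAR = 3.15e7

    # A Planck length is about 1e-20 of a proton's width.
    print(PLANCK_LENGTH / PROTON_DIAMETER)                      # ~9.4e-21

    # Magnify a proton to the size of the observable universe and a
    # Planck length scales up to roughly the size of the earth.
    print(PLANCK_LENGTH * UNIVERSE_DIAMETER / PROTON_DIAMETER)  # ~8.3e6 meters

    # One second contains ~1.9e43 Planck times; that many seconds,
    # expressed in years, dwarfs the universe's ~1.4e10-year age.
    print(1 / PLANCK_TIME / SECONDS_PER_YEAR)                   # ~5.9e35 years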

There are also limits on how large space and time can be. Although the universe is very large and very old, it is not infinitely large or old. The observable universe is estimated to be 93 billion light-years across, and about 14 billion years old.

The fact that space and time, the most elemental aspects of reality, have an upper and lower limit implies that there is a cosmic minimum and maximum resolution to the universe, much like a TV has a minimum and maximum image resolution, beyond which it cannot go.

Whether these limits are by design or simply an inherent aspect of our reality remains a mystery.

Magnified TV resolution

Applied science

Scientists are trying to unify quantum mechanics with general relativity into a grand “theory of everything” that would resolve the fundamental differences between them. Still, despite these differences, each area has led to amazing discoveries.

Applications of quantum mechanics include lasers, electron microscopes, solar cells and quantum computing. General relativity is applied to cosmology (the study of the origin and evolution of the universe), the study of black holes, nuclear fusion (a potentially limitless source of clean energy) and GPS (the Global Positioning System).

The application of GPS is particularly salient. GPS helps us navigate through new and strange territory; it helps us find our way. This is exactly what a grand unified theory of general relativity and quantum mechanics would do – decode exactly where we fit in the universe.

Scientists have made enormous advances using both these areas, despite the fact that they contradict each other. There is no conflict between progress and contradiction. Science takes the best of both worlds, large and small.

Feeling conflicted

Contradictions are useful not just in science but in life. We live with, indeed we thrive on, contradictions.

We must be assertive but flexible. We must be independent but social. We judge but are compassionate. We seek challenges but also comfort. We must fit within society but also question the status quo and change it. We have free will, but are strongly influenced by others and outside forces.

Through these contradictions, we grow and mature. We do not need to choose one or the other – we need both.

Intelligence 2.0

Both science and humanity are facing their own paradox with the explosive growth of AI (artificial intelligence). Dall-E and MidJourney create astounding images from descriptive text. ChatGPT gives detailed, coherent responses to questions and requests, including general queries, poems and short stories.

Here is an example of an image created using MidJourney:

Have cake, will eat it too

Depending on the output you are observing, AI sometimes produces better results than a human can, and sometimes does not. To resolve this paradox, we can have AI and humans working together.

A chess player working with an AI will play better than a person or an AI separately. Doctors work with AI medical systems such as IBM’s Watson to diagnose diseases.

There is a particularly remarkable example of humans working with AI. Music historians, musicologists, composers and computer scientists created an AI to analyze Beethoven’s unfinished 10th symphony and all his other works. Based on the patterns the AI found, it generated a musical score on what it thought the remainder of the unfinished symphony should be. The musicians constantly reviewed and updated the AI’s score using their extensive human musical skills. You can see a portion of the remarkable final result here.

We can go even further. Humans are not only working well with AIs, they are combining different AI systems together to produce complex new forms. People have used ChatGPT to write descriptive stories, then copied these descriptions into an AI image generator, creating completely original graphic novels. Similar combinations of AIs can create sounds and videos. In the near future, we may be able to use a single AI to instantly create entire movies with coherent stories, images, music and sound.

I think, therefore I doublethink

As we’ve seen, in science and in life, we use contradictory approaches to solve problems and answer difficult questions. This is doublethink, the ability to hold two contradictory thoughts simultaneously, a concept from George Orwell’s dystopian classic 1984.

Doublethink is illustrated in a scene in the novel where Winston, held captive by O’Brien, says that two and two are four. O’Brien calmly responds:

“Sometimes, Winston. Sometimes they are five. Sometimes they are three. Sometimes they are all of them at once. You must try harder. It is not easy to become sane.”

Doublethink surely is not a good thing, then, is it?

It depends – in fact, you can use doublethink to describe itself:

  • Doublethink is a dangerous form of thinking that leads to contradictory ideas.
  • Doublethink is a productive form of thinking that leads to new ideas and tangible benefits.

Both of these things are true, and not true.

The Big Question

Science uses doublethink to describe the nature of light and reality. But could doublethink answer the question of what gives life meaning and purpose, and whether God exists?

On these ultimate questions, the main belief systems include:

Theism – The belief that God exists and is the source of all meaning and morality.

Atheism – The belief that God does not exist. Life has no meaning or we must invent one.

Agnosticism – The belief that we cannot know if God exists or not. Life may or may not have meaning.

Humanism – The belief in the worth and dignity of humans individually and collectively, rather than God, with an emphasis on critical thinking, reason, individual freedom and free thought, and the scientific method.

Ignosticism – This is similar to agnosticism in that it states that it’s impossible to know if God exists. However, it goes further by saying that the question itself is meaningless, because it’s impossible to know exactly what God is or what the nature of his existence could be.

All these views have their strengths and weaknesses. For me, the three most compelling are humanism, agnosticism and theism.

But which one of these is “correct”?

Pick a door, any door

Quantum theology

I suggest a new belief system, quantheism, which states that depending on the observer and the situation, God exists or does not exist.

When we observe the awesome complexity of the universe, the planets, stars and galaxies, the incredible variety of life, and the sense of community and history within our faith, God flows into existence.

When we see that most people, including millions of children and other innocents, suffer in poverty, misery, war, natural disasters and disease, often dying young, God slips out of existence.

When we recognize the challenge of knowing that there must be a right and wrong, but not having an outside source of morality, God flows into existence. When we observe that people can behave rightly or wrongly regardless of whether they are religious or secular, God fades away.

When we struggle between these two views, we are agnostic, and God moves into a quantum state, neither existing nor not existing.

Theism provides an order and structure to the world, a sense of wonder, a belief that there is more out there than what we experience with our physical senses; it gives us a sense of hope, of connection to an infinite guiding force, and the knowledge that this life is not all there is.

Humanism gives us rational thought, the scientific method, personal freedom, tolerance, self-determination, and personal responsibility.

Agnosticism gives us the freedom to doubt, and freedom from absolutes.

So am I a theist, humanist, or agnostic?

Sometimes I am a theist. Sometimes I am a humanist. Sometimes I am an agnostic. Sometimes I am all of them at once.

It is not easy to become sane.


Deus Machina (The God Machine)

Artificial intelligence (AI) has progressed significantly over the past few decades. Proof of this is that some tools previously described as AI are no longer described this way. These include voice-to-text recognition, automatic spelling and grammar correction (used on large portions of this article), and optical character recognition (OCR), an application that converts images of text into editable text.

Mr. Watson, Come Here

IBM’s AI supercomputer Watson is used in medical diagnostics, education, advertising, law, risk management, customer service and business automation. It won the TV quiz show Jeopardy! without even being connected to the Internet.

IBM’s Watson supercomputer

GPT-3 (so much better than GPT-2)

One of the newest AI tools is the Generative Pre-trained Transformer 3 or GPT-3, a complex neural network developed by the OpenAI research lab. Now in its third release (hence the number 3 in the name), this system generates text using algorithms which have been trained by gathering and analyzing massive amounts of text on the internet, including thousands of online books and all of Wikipedia.

OpenAI’s GPT-3

GPT-3 is a language prediction model. It takes a user’s typewritten input and tries to predict what will be the most useful output, based on the text that it has been fed from these other sources. It isn’t always correct and sometimes produces gibberish, but as it gathers and analyzes more text, it gets smarter.
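GPT-3 itself is vastly more complex, but the core idea of predicting the next word from previously seen text can be sketched in a few lines of Python. This toy model simply counts which word follows which in its training text:

    from collections import Counter, defaultdict

    # A toy "language prediction model": count which word follows which,
    # then predict the most frequent follower. GPT-3 does something
    # conceptually similar with billions of parameters instead of a table.
    training_text = "the cat sat on the mat and the cat slept"
    words = training_text.split()

    followers = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        followers[current_word][next_word] += 1

    def predict(word):
        """Return the most frequent next word seen in training, or None."""
        candidates = followers.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(predict("the"))  # 'cat' -- "the" preceded "cat" twice, "mat" once

The real system predicts from billions of pages rather than one sentence, and it too gets smarter as it reads more; the principle is the same.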

GPT-3 can answer questions, summarize text, write articles (with a little human help), translate languages, write computer code and carry on an intelligent conversation. By doing so, it appears to pass the Turing test, which stipulates that if a person cannot tell the difference between the responses a computer gives and those of a human, then the computer is exhibiting some form of intelligence.

Intelligence? There’s an app for that.

When you combine GPT-3 with other applications, the results are astounding. One GPT-3 application allows people to correspond with historical figures via email based on their writings. Imagine emailing Einstein, Leonardo da Vinci or Ernest Hemingway.

Dall-E uses GPT-3 to generate images based on a simple text input. For example, if you enter: “a store front that has the word ‘openai’ written on it”, Dall-E generates these images:

GPT-3 computer generated images

You can see more examples here: https://openai.com/blog/dall-e/

AI & Big Data – They’re Going Places

AI learns by acquiring information. For this to happen, vast amounts of the world’s information first had to be digitized, copied or scanned from paper and entered into databases, which happened with the explosive growth of the internet.

But it’s not just about the quantity of information. Modern AI systems can analyze this data and find connections. This involves Big Data, which should be called Big Learning. Big Data is the process of reading massive amounts of information and then drawing conclusions or making inferences from it.

Governments use Big Data to detect tax fraud, monitor and control traffic and manage transportation systems. Retailers use Big Data to analyze consumer trends, target potential users through social media, and optimize inventory and hiring. Health care providers use it to provide better personalized medical care, lower patient risk, reduce waste and automate patient data reports.

Brain, Version 2.0

The growth of the internet and Big Data mimics the growth of the human mind. A newborn’s brain works at a very simple level as the child learns to see, hear and move around. As the child develops, they learn to speak, carry on a conversation and interact with others in a meaningful way.

The Mind: Software + Hardware

A person’s brain is their hardware. Their thoughts and all the information in their brain’s neural network (the brain’s internet) are the software. Just as AI is constantly learning and finding connections, so are we humans. We learn from our experiences, from the connections that we’ve made with other people, and by taking in new information. In doing so, we hope to become not only smarter but wiser.

Code Physician, Heal Thyself

Returning to GPT-3: there are GPT-3 applications that can write code and create apps. For example, if you enter “Create a to do list”, GPT-3 will instantly write the code and create a working “To Do list” application. Microsoft and Cambridge University have developed DeepCoder, a tool that writes code after searching through a code database.

Note that it is still humans who are writing these code-writing applications. That is, although AI systems can write code, they cannot yet write the AI code that writes the code. However, computer science contains the theory of self-modifying code: code that alters its own instructions while it’s running.
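Self-modifying code sounds exotic, but a toy illustration is simple. This sketch (plain Python, unrelated to GPT-3 itself) stores a function as text, runs it, then rewrites and reloads it while the program is running:

    # A toy illustration of self-modifying code: the program keeps one of
    # its own functions as source text and rewrites it at runtime.
    source = "def step(x):\n    return x + 1\n"

    exec(source)        # define step() from its own source text
    print(step(3))      # 4

    # "Self-modification": generate an altered version, then replace the original.
    source = source.replace("x + 1", "x * 2")
    exec(source)        # redefine step()
    print(step(3))      # 6 -- the function now doubles instead of incrementing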

If self-modifying code were implemented in a high-level artificial intelligence system such as GPT-3, the result would be an AI system that continually updates itself. However, the amount of computing power required to do this would be enormous – enter quantum computing.

Quantum Parallels

Quantum computing is light years ahead of current or “classical” computing. Classical computing (the computers we use today) uses bits of binary information stored as 0 or 1. Quantum computers use qubits, which can be 0 and 1 at the same time. This means that a quantum computer can work on multiple problems and calculations simultaneously, whereas a classical computer works sequentially, solving one problem at a time.

A simple example is solving a maze. A classical computer finds the solution by examining each path one after the other, in sequence. A quantum computer looks at all the paths at the same time, solving the problem instantly. Google’s quantum computer is about 158 million times faster than the world’s fastest supercomputer.
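For the classical half of that comparison, here is a minimal sequential maze solver in Python; note that the search loop visits exactly one cell per iteration – the one-at-a-time behaviour described above:

    from collections import deque

    # A classical maze solver: breadth-first search visits cells one at a
    # time, in sequence. '#' is a wall, 'S' the start, 'E' the exit.
    MAZE = ["S.#",
            ".##",
            "..E"]

    def solve(maze):
        rows, cols = len(maze), len(maze[0])
        start = next((r, c) for r in range(rows) for c in range(cols)
                     if maze[r][c] == "S")
        queue, seen = deque([start]), {start}
        while queue:                      # one cell per iteration: sequential
            r, c = queue.popleft()
            if maze[r][c] == "E":
                return True
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and maze[nr][nc] != "#" and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    queue.append((nr, nc))
        return False

    print(solve(MAZE))   # True -- a path exists from S to E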

Google’s Quantum Computer: Sycamore

Quantum computing could be applied to many areas including finance, medicine, pharmaceuticals, nuclear fusion, AI and Big Data. Medicine is a particularly compelling example. Vaccines usually take 10 to 15 years to develop. In the current pandemic, it took less than a year to develop a working vaccine for COVID-19. A quantum computer, by analyzing the structure of all known viruses and vaccines and how each vaccine treats each type of virus, could design a new vaccine not in years, months, weeks or even days, but in seconds.

Google, IBM and other companies are spending billions on quantum computing. In 2019, Google claimed its quantum computer could perform a computation in just over 3 minutes that would take the world’s fastest supercomputer 10,000 years. One year later, Chinese scientists announced that they built a quantum computer 10 billion times faster than Google’s, or 100 trillion times faster than the world’s currently most advanced working supercomputer. As Hartmut Neven, the director of Google’s Quantum Artificial Intelligence Lab, said: “it looks like nothing is happening, and then whoops, suddenly you’re in a different world.”

Looping to the Infinite

Imagine a super-intelligent, self-learning and self-enhancing system on a quantum computer. Its basic functionality could be represented as a simple loop, sketched in code after the list below.

This system would continually: 

  • scour the internet for information
  • look for patterns, structure and relationships in this information
  • study its own code to look for improvements
  • update and test its code 
  • study its hardware design to suggest improvements
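Here is that loop as a hypothetical Python sketch. Every class and method below is an invented placeholder (no such system or API exists); it only makes the shape of the cycle concrete:

    import random

    class SelfImprovingSystem:               # hypothetical placeholder class
        def __init__(self):
            self.intelligence = 1.0
        def scour_internet(self):
            return ["new information"]       # stands in for web-scale input
        def find_patterns(self, data):
            pass                             # stands in for the analysis step
        def propose_code_patch(self):
            return "patch"                   # stands in for self-analysis
        def test(self, patch):
            return random.random() < 0.5     # stands in for a safety test
        def apply(self, patch):
            self.intelligence *= 1.01        # each kept patch improves it

    def run(system, cycles):
        for _ in range(cycles):
            data = system.scour_internet()        # 1. gather information
            system.find_patterns(data)            # 2. look for structure
            patch = system.propose_code_patch()   # 3. study its own code
            if system.test(patch):                # 4. update and test its code
                system.apply(patch)
            # 5. hardware suggestions would still go to humans (see below)

    system = SelfImprovingSystem()
    run(system, cycles=100)
    print(system.intelligence)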

Any hardware updates would still have to be done by humans, unless this system controlled a maintenance robot in a super factory with access to the required materials.

The Machine Doubles Down

Because this system would be testing its own enhancements, and because this could potentially cause a system problem, it would be safer to have two AI systems working in tandem.

In this arrangement, the first AI system (system A) updates system B and then tests it. If the test is successful, the updates to system B are retained and also applied to system A. This process then repeats for system B, continuing in an endless loop.

To make the process more efficient, there could be multiple systems, continually improving each other in a virtuous cycle.

For example, five systems could continually test and improve each other, and one could add as many systems as required, given the necessary infrastructure.
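A sketch of this ring arrangement, reusing the hypothetical SelfImprovingSystem placeholder from the earlier sketch:

    # Hypothetical: each system patches the next one in the ring; updates
    # that pass the target's test are propagated to every system.
    def tandem_cycle(systems, rounds):
        for _ in range(rounds):
            for i, improver in enumerate(systems):
                target = systems[(i + 1) % len(systems)]   # next in the ring
                patch = improver.propose_code_patch()
                if target.test(patch):         # test on the target first
                    for s in systems:          # success propagates to all
                        s.apply(patch)

    tandem_cycle([SelfImprovingSystem() for _ in range(5)], rounds=10)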

The Language of Layers

Although this system would initially be configured to continually improve the software and hardware, it could evolve even further. To understand this, you need to know how computers currently function.

Computer systems contain three layers of code:

  • Machine level language – the raw binary code made up of zeroes and ones that instructs the computer in its operation
  • Assembly language – code that uses short words to represent machine level instructions, making it easier for programmers to write machine level code
  • High level languages – programming languages that can be read and understood by programmers, including C, C++, Java and Visual Basic

Computers use operating systems (such as Windows, MacOS and Android) to manage the computer’s resources, and applications such as Word and Excel that run on top of the operating system. Operating systems and applications are written in high level languages, which are ultimately translated into machine level language that the computer can understand.

All code and software runs on hardware: the physical parts of the system, including the motherboard, CPU, RAM and the various circuits. In addition, the operating system manages how the applications communicate with the hardware.

Hardware: the ghost in the machine

Summing up, current computer systems are built upon these layers:

  • machine level language
  • assembly language
  • programming language
  • operating system
  • applications
  • hardware

This is actually a simplified view – there are additional layers within some of these layers, but it’s a good overview. A sufficiently advanced self-improving system could, in theory, discover a way to merge these separate layers into one.

Compressed Computing

Just as companies become more efficient by removing unnecessary layers of management (a process called flattening the pyramid), an advanced computer intelligence could discover how to function as a hyper-advanced single-layer system, where the operating system and applications are intertwined directly with the hardware.

Because this would be a quantum computer, each bit of information could be stored at the smallest imaginable level: a subatomic particle. A basic element such as hydrogen contains billions upon billions of such particles in a single cubic centimeter, and each particle would be a transistor – a single computing circuit.

The most advanced computer processor available today contains about 40 billion transistors. A quantum system could have trillions of transistors in a compact space containing a strange hybrid of software and hardware – a “quantumware” computer. It would be as if all of IBM’s 346,000 employees were replaced by one super-human.

An atomic grid

The Runaway Intelligence Train

The question then becomes: at what rate would this system’s intelligence increase? Intelligence is a difficult thing to quantify and measure, but let’s conservatively assume that:

  • this system’s intelligence increases by 1% each cycle, starting with a cycle of one full day (24 hours)
  • the time required to become 1% more intelligent decreases by 1% after the first cycle and then continues to decrease by 1% after each cycle

After the first day, the system would be 1% more intelligent, and the time required for it to become 1% more intelligent would then be 99% of one day, about 23 hours and 45 minutes.

Runaway to infinity

After 101 days, something remarkable happens. It would only take 1 second to become 1% more intelligent. Part way into this 101st day, this system would be 998 trillion times more intelligent than when it started. How large is 998 trillion? Counting one number per second, it would take about 32 million years to count to 998 trillion.
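Under one reading of these assumptions (intelligence grows 1% per cycle, and each cycle is 1% shorter than the one before), a few lines of Python show the runaway. The cycle times form a geometric series that converges to exactly 100 days, which is why the numbers explode around then:

    # Simulate the compounding assumptions above (one possible reading).
    cycle_seconds = 24 * 60 * 60      # first cycle: one full day
    intelligence = 1.0                # starting intelligence, normalized
    elapsed = 0.0
    cycles = 0

    while cycle_seconds > 1:          # run until a cycle takes under a second
        elapsed += cycle_seconds
        intelligence *= 1.01
        cycle_seconds *= 0.99
        cycles += 1

    print(cycles, elapsed / 86400, intelligence)
    # ~1131 cycles and ~100 days to reach one-second cycles; after that the
    # cycles shrink toward zero and the multiplier grows without bound.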

This system would be a technological singularity: an intelligent agent running an ever-increasing series of self-improvement cycles, becoming rapidly more intelligent, resulting in a powerful superintelligence that exceeds all of humanity’s intelligence.

Does all this sound like science fiction? In addition to building a quantum computer, Google has already taken the first step by investigating quantum artificial intelligence.

If developed, a self-learning quantum AI system would not be beyond our imagination. It would be beyond what we could imagine.

Final random thoughts

There’s an interesting Twitter feed with insightful observations of art and science such as:

  • AI will create jobs if it succeeds, and destroy jobs if it fails.
  • Illusion is the extension of unconsciousness into the realm of consciousness.
  • Art is the debris from the collision between the soul and the world.

These Tweets weren’t written by a person – they were generated by the artificial intelligence GPT-3 in its Twitter feed: https://twitter.com/ByGpt3

The singularity is approaching – are you ready?

The singularity awaits…

Viral inversion

Welcome to the Third World War, where the enemy is not fascism, communism or any other “ism” but an object just over one 10,000th of a millimeter wide. In one of the greatest ironies of history, the world is being ravaged not by the very large (war, earthquakes, hurricanes, or nuclear weapons) but the unimaginably small.

Coronaviruses derive their name from the spikes that form a crown or “corona” atop their spherical body. These spikes are what make the virus so lethal, because they enable it to latch onto the cells of the lungs, causing severe respiratory problems. In addition, many infected people don’t exhibit symptoms right away, if at all. These people continue to move about, unknowingly infecting others. The death rate is 1% to 10% – the ultimate “killer app”.

The war against Corona is unlike any other. In the last World War, half the world was fighting the other half. In this war, the entire world is united against one enemy. All the greatest minds (scientists, epidemiologists, medical specialists, software engineers, researchers and pharmaceutical developers) are working together to develop a treatment and vaccine. U.S. scientists were able to decode the genome of the coronavirus within weeks. Based on this genetic structure, they reverse engineered a potential vaccine by comparing the coronavirus against a database of other vaccines and the viruses that those vaccines effectively treat. However, it could be up to 18 months before a viable vaccine is released. Why the delay? Any vaccine must first be tested on humans to verify its safety and efficacy. Software’s the easy part; the bottleneck is the “wetware” – that is, people.

Sadly, we knew this war was coming. Governments were warned repeatedly that an outbreak of this magnitude was likely, yet did little to prepare. Four years ago, Bill Gates prophesied this event in a TED Talk entitled, appropriately enough, We’re Not Ready. In this talk, he describes in detail the plague that would occur just a few years later. He references the movie Contagion, which also predicted current events. This film dramatizes a virus from Asia with a genetic component from a bat that rapidly spreads throughout the world. There are scenes of hoarding, panic and death, and references to social distancing. Just as a virus can mutate, this film has mutated from a fictional drama into a documentary.

Clearly, there was a delayed reaction from governments the world over. What’s disturbing is that this delay continues. There seems to be a two-week lag between the actions that governments should be taking and the ones they are taking. That is, they are on a two-week time delay. It’s unfortunate that we cannot create a mini-time machine and force world leaders to travel just a little bit into the future. With our backs to the wall, we must go back to the future.

What the world is watching now with bated breath are the numbers. Specifically, everyone is closely following the number of new infections at the state or provincial and national level. As quickly as these numbers have grown, they will peak and then start to decrease. That is, the growth curve will begin to invert. It is the most important inversion in recent history, but it won’t be the first inversion to occur during this crisis.

Many films (such as the previously mentioned Contagion) have dramatized an outbreak of this type, with startling scenes of deserted cities. Today, the entertainment world and the real world have inverted. Scenes previously filmed now occur in real life. Conversely, films and TV shows from the recent past have scenes of crowded streets, people in airplanes, taking cruises, and being close together, all things which used to be in the real world.

Social media has also been inverted. Previously, it had been criticized for being a poor substitute for personal contact; for leading to people being less social. Now, because people are unable to be physically together, social media, especially video chat, has finally begun to live up to its name. Like many others, I converse with friends, family and coworkers using video chat. While not as meaningful as in-person contact, it’s a powerful alternative; a generic drug substitute for human contact.

On the subject of social media: one expression I’ve always abhorred is “going viral”. Current events have taught us that “going viral” is not exactly a good thing. We’d never say that a tweet has “gone murder” or “gone genocide”, but we felt comfortable saying it spread like a virus. May this expression die out as quickly as the virus itself.

But perhaps the two most important things that have been inverted are the significant and the trivial. Do you remember the news a few weeks ago? The U.S. presidential race? Global warming? The train blockades in Canada? Harry and Meghan relinquishing their royal titles? How utterly inconsequential these things appear now. It is a tragic law of nature that to refocus the world, calamity is required. The formula appears to be: humanity + calamity = humanity 2.0. The price for this upgrade is a heavy one.

In the end, we must have hope. We must recognize that although we’re powerless over this virus, we’re not powerless over how we act and respond to it. Social distancing is unpleasant, but better to be placed six feet apart horizontally on the ground than six feet vertically into the ground.

In the end, it is our thoughts that will keep us sane, and all thoughts spring from words. We can look again to the last World War for inspiration. Winston Churchill concluded one of his famous speeches with these words:

We shall not fail or falter; we shall not weaken or tire. Neither the sudden shock of battle, nor the long-drawn trials of vigilance and exertion will wear us down. Give us the tools, and we will finish the job.

Churchill spoke these words in 1941. Nearly 80 years later, we’re using new tools to fight a new enemy: the tools of medical science and technology.

These are the tools; let us finish the job.

Stay safe.

Clarity or Nothing

A distortion is a change in the form of something, usually an object, image or sound. For example, a car can become distorted after an accident. Photographs or videos can become distorted if they are blurred, pixelated, or warped. Sound can be distorted using sound mixers.

Sometimes, the distortion is desirable. To represent our three-dimensional earth as a flat two-dimensional image, the world is distorted using a global map projection. In music, distortions can reduce noise or give the music a fuller sound. Many artworks are distortions of real objects, such as Dali’s melting watches.

 

Note that the distortions described here apply to our two main senses: sight and sound. You don’t often see descriptions of distortions applied to our other three senses: smell, taste and touch. I’ve yet to hear someone say that a rose smells distorted, a cake tastes distorted, or a blanket feels distorted. The closest someone will come to describing these scenarios is that the object in question is “off”.

Distortions apply not only to our senses, but also to our minds. Steve Jobs was notorious for his “reality distortion field”: his refusal to accept facts as they were, and his habit of demanding unrealistic deadlines or features for his products. He would use his charismatic personality to cajole his workers into doing the impossible. Sometimes it worked, but it pushed his staff to their mental limits.

Jobs was engaging in a type of cognitive distortion called mental filtering. A cognitive distortion is a flaw in someone’s thought processes, a form of twisted thinking. It causes the thinker to perceive reality incorrectly. Mental filtering involves focusing solely on the negative or positive aspects of something, excluding all other relevant information.

Other types of cognitive distortions include over-generalization (jumping to conclusions based on one piece of evidence) and emotional reasoning (believing that something is true simply because it feels true.)

Distortions can wreak havoc in communication. We can get into trouble when, rather than speaking directly to someone about a problem, we insert an intermediary between us and the person we want to speak with. If you’ve ever played “broken telephone”, you know how disastrous this can be.

Communication and language are enormously complex. When we speak with someone, it is not just the content of our words that we are transmitting; it is the tone of those words and our body language. Most communication is nonverbal. Observe two or more people on TV with the volume off. You won’t know what they are saying, but you will know what they are feeling and thinking.


It’s so tempting to use an intermediary when we are reluctant to speak directly with another person. The intermediary becomes a middleman or informational broker. A communication breakdown occurs because when either person talks to the intermediary, the intermediary will unconsciously distort in their minds what they have heard. There is then a further distortion when the intermediary communicates to the second person what the intermediary thinks they heard the first person say. On it goes; with every communication transmission loop, the message continues to be distorted.

The solution is to dismiss the intermediary. Always speak directly with the other person; do not engage in a communication distortion field by adding a third person. However, if both parties absolutely insist on using an intermediary, then have all three people in one room at the same time, with the intermediary acting as a negotiator between the two sides. This will not only reduce the communication distortion (because each side will be able to hear the other); it can actually improve the conversation, because the intermediary, assuming they are fair and objective, can offer a balanced perspective and, one hopes, bring the two sides together.

In my thirty years in the workplace, I’ve seen that the primary cause of problems is poor or distorted communication. As a business communicator, I continuously strive to reduce this distortion.

Reducing distortion is known as bringing clarity. Your method of communication, whether oral or written, should be like a glass bowl, clearly displaying the contents of your message, without the medium of the message causing it to be distorted.

To sum up: Avoid third parties. Bring clarity. Banish distortions. 

Are we clear?


Chance Connections

Quantum computing is the latest and strangest development in supercomputers – computers that perform incredibly complex tasks. Science fiction author Arthur C. Clarke mused that “any sufficiently advanced technology is indistinguishable from magic.” Quantum computing is not magic, but dangerously close. It is based on two bizarre principles: superposition and entanglement.

Superposition involves probabilities. Classical computing is based on the binary system of 0s and 1s. All computer code and electronic devices run on this system; if you go deep enough into the code, all you will see are 0s and 1s (called bits), and nothing in between. A bit, therefore, is the smallest unit in a computer program.

Quantum computing uses a special type of bit: a qubit. A qubit, like a bit, can have a value of 0 or 1. But it can also have both these values at the same time. This is superposition – the ability of something to be in more than one state simultaneously. We can’t know what state it is in until we observe it; until then, all we can do is assign a probability of it being in a certain state.

Entanglement is an even more bizarre aspect of quantum computing. It refers to the phenomenon that measuring one qubit changes what you see in another, no matter how far apart the two are. For example, if you see that the value of one qubit is 0, then the value of another entangled qubit billions of kilometres away becomes 1. There appears to be a mystical force connecting the two particles. Einstein called entanglement “spooky action at a distance”.
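Superposition and entanglement can be simulated, slowly, on a classical computer. Here is a minimal NumPy sketch of two entangled qubits whose measurements always disagree, like the example just described:

    import numpy as np

    # State vector of two qubits over the basis |00>, |01>, |10>, |11>.
    # This entangled state is an equal superposition of |01> and |10>.
    state = np.array([0, 1, -1, 0]) / np.sqrt(2)

    probabilities = np.abs(state) ** 2
    print(probabilities)   # [0. 0.5 0.5 0.] -- 50/50 between |01> and |10>

    # "Measure" both qubits by sampling from those probabilities.
    outcome = np.random.choice(["00", "01", "10", "11"], p=probabilities)
    print(outcome)         # always '01' or '10': if one qubit reads 0,
                           # the other reads 1 -- entanglement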

Quantum computing sounds like science fiction, but it is not. Companies including IBM, Google, Microsoft and Intel have built (or are developing) quantum computers. As with early classical computers from the 1930s and 1940s, quantum computers are beastly machines, with many wires and cables protruding in all directions. Additionally, they must operate near absolute zero (about -273°C, roughly the temperature of outer space).

The potential applications of quantum computing are limitless. Because of their quantum nature, they will be billions of times more powerful than the most powerful supercomputers today. They will be able to solve problems or create applications that traditional computers simply cannot, including:

  • artificial intelligence & machine learning – systems that can think, reason, and make rational judgments and recommendations, including medical diagnoses, farming and energy efficiency
  • molecular modeling in chemistry and physics
  • cryptography – creating unbreakable online security
  • financial applications including investments, stock market and economic analyses
  • weather forecasting

To recap, qubits (the building blocks of quantum computing):

  • exist in many states simultaneously (superposition)
  • are mysteriously connected together (entanglement)

Because quantum computing attempts to harness the underlying principles of reality, it follows that these two principles should reflect reality; that is, existence should also be based on the fact that things:

  • exist in many states simultaneously
  • are mysteriously connected together (even when far apart)

At first glance, this seems absurd. Our everyday experience tells us that things exist in one state, and that if you change something, it’s not going to change something else, especially if it is far away.

But if we look closer, we can see that these are the same principles upon which the greatest and most pervasive technological innovation is based. It’s the technology that has changed the world more rapidly than almost anything else. It’s the technology that has toppled governments and powerful leaders, established friendships, solved mysteries while creating new ones and caused untold heartbreak, joy, sadness and everything in between. It’s the technology that you are using right now: the Internet. While the Internet does not represent all reality, it has come to represent and directly influence a large portion of it. It has become, quite literally, the “new reality”.


On the Internet, the same website appears differently for each user, depending on the device they are using. On certain sites, different information appears. For example, travel sites will present different prices depending on a user’s location, computer, previous queries and so on. This is superposition: the ability of the same thing to exist in different states.

Online, we are all connected, regardless of distance. When you do anything online (make a purchase, send a message, check your banking transactions, and so on), it makes no difference where you perform this action. Cyberbullying is based on the premise that sending a hurtful email or text has the same effect whether the sender is 5 metres from the receiver or 5,000 km away. An action in one area affects another area – there are no distances online.

It’s therefore no surprise that IBM has developed an online quantum computer. That is, a computer that is based on the principles of superposition and entanglement now exists on a platform that is based on superposition and entanglement.

The answer to the age-old question “What is reality?” appears close at hand: probabilities and connections. The question now is what will happen after we’ve built computers that are millions of times more intelligent than us?

Will it lead to a utopia where all of the world’s problems are solved by benevolent machines? Or will we end up in an Orwellian nightmare, where heartless machines enslave humanity, or, worse still, we use machines to enslave others?

Place your bets, ladies and gentlemen. Place your bets…


Life, The Algorithm


In a most remarkable product demonstration, Google unveiled their improved artificial intelligence (AI) application, Google Assistant. In the demo, the application phones up a hairdresser and, using uncannily natural-sounding speech, peppered with “uhms”, is able to book an appointment by conversing with the hairdresser. In doing so, Google Assistant appears to pass the Turing Test, developed by the British mathematician Alan Turing in 1950. This test postulates that if a person can’t tell whether they are communicating with a human or a machine, then the machine has passed the test and therefore “thinks”.

In the demo, it is a machine that (or perhaps who?) is calling the business to book the appointment, and the individual answering the phone is human. However, this could easily be reversed, so that it is a person who is calling the business, and the machine answering for the business.

This raises an interesting question: what if there were a machine at both ends of the conversation, that is, one Google Assistant calling another? If the AI engine running both assistants is advanced enough, they could, in theory, carry on a meaningful conversation. Although this might seem like the ultimate AI prize, there’s a much simpler solution: using a website to book an appointment. Granted, it doesn’t have all the nuances of a regular conversation, but if the goal is simply to book an appointment, then the user’s computer simply has to connect with the business’s.

This use of advanced AI is part of a larger phenomenon: the degree to which our daily tasks have been automated or performed by others. Up to a mere 200 years ago, people made and repaired what they needed, including clothes, tools, furniture, and machinery, and often grew their own food. The industrial and agricultural revolutions changed all that. Goods could be mass-manufactured more efficiently and at a lower cost. Food could be grown on a mass scale. We’ve moved away from a society in which individuals made their possessions to one in which we let others do this for us.

As recently as the 1960s, many people maintained and fixed their cars; most people today leave this to a mechanic. We have outsourced nearly everything. Although we have gained much in quality, price and selection, in the process, we have lost many practical skills.

This trend continues as more and more processes are automated or simplified. Coffee makers that use pre-packaged pods are easier to use than regular coffee makers. However, it would be a sad thing if an entire generation did not know how to brew coffee the regular way. Even brewing coffee “the regular way” still involves using a machine that others have made and that we cannot fix, powered by electricity that we do not generate, using beans that we can neither grow nor process ourselves, and water that is automatically pumped into our home using an infrastructure that we cannot maintain. The parts that make up the parts that make up still larger parts are designed and built by others.

At its heart, Google Assistant uses algorithms, sets of sequential rules or instructions that solve a problem. A simple example is converting Celsius to Fahrenheit: multiply by 9, divide by 5, and then add 32. The algorithms used by software applications are, of course, millions of times more complex than this example, because they use millions of lines of code.
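Written out in Python, that temperature algorithm looks like this:

    def celsius_to_fahrenheit(celsius):
        """The three-step algorithm: multiply by 9, divide by 5, add 32."""
        return celsius * 9 / 5 + 32

    print(celsius_to_fahrenheit(100))   # 212.0 -- water boils
    print(celsius_to_fahrenheit(0))     # 32.0  -- water freezes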

Algorithms are omnipresent. They are used extensively by online retailers (such as Amazon) to recommend purchases for us based on our previous purchases and browsing habits. Facebook uses them to track our activity and then sell that data to others, often with dire results. Algorithms are also used in two of the most important decisions a person can make: whom they love (in dating applications) and where they work (in résumé and interview screening applications).

Algorithms have even been used to determine how likely a criminal defendant is to re-offend, based on attributes such as race, gender, age, neighbourhood and past criminal record. But is it ethical for a judge to use an algorithm to determine the length of a sentence? This happened in the case of Eric Loomis, who received a six-year prison sentence in part due to a report the judge received based on a software algorithm.

Society is facing the same trade-off that it faced 200 years ago as it moved from personal to mass manufacturing: convenience and comfort versus knowledge and independence. As we relinquish more and more power to machines and let algorithms make more of our decisions, we achieve more comfort but less freedom. We are, bit by (computer) bit, quietly choosing to live in a massive hotel. It’s pleasant, you don’t have to do much, but it does not prepare us for life.

For in life, there is often sadness, pain and hardship. There is no algorithm that tells us how to deal with these things, nor will there ever be.


In our image

Is a ship which has had all its parts replaced over many years still the same ship? This question is explored in Theseus’s paradox, which asks whether something remains the same even if all of its components have changed. Other examples include an axe that’s had several handles and blades, and a broom that’s had several heads and handles.

Moving from things to people:

  • The rock groups Yes, Heart and Blood, Sweat & Tears do not have any of their original band members – are they the same band?
  • Canada’s landscape and population have vastly changed since its founding in 1867; is it the same country as it was back then?

It all depends on how you define “the same”. If you mean “something containing all of the original components”, then these things are not the same. However, if you mean “with the same general identity or name”, then these things are the same. The paradox is that both these things can be true. Canada as an idea never changes; Canada as a thing always changes.

With human beings, the question becomes even murkier. Most of the cells in the human body are replaced every 7 to 15 years. Is someone the same person they were 15 years ago? The answer may be found in our technology.

Like human memory, computer memory is also ethereal. It is stored as a complex set of magnetic charges, which in turn represent the binary code that drives the system. The entire system is dynamic. Magnetic charges are continually moved around, so that each time you use the device, the layout and order of the memory changes. However, from the user’s perspective, it is still the same device, and nothing has changed. That is, the whole is greater than the sum of its parts, because the whole is constant regardless of where and what those parts are. Therefore, even though from a material perspective the device has changed, from a perceptual perspective it has not. Perception overrides materialism.

The same is true in people. We don’t define ourselves solely as physical beings but also as spiritual ones, with a soul we are born with that never changes. Even though physically we’re not the same as we were years ago, spiritually and emotionally, we know we are the same. It is this knowledge that keeps us sane. People who perceive their soul (or personality) as changing are often diagnosed with Multiple Personality Disorder. It is as though the hard drive in their brain is being regularly replaced with another.

It is no coincidence that the essence of our existence is also in our technology. Those of faith believe God created mankind in his own image. Mankind, in turn, inspired by this, has created machines in his. Perhaps this is why the entire contents of a hard drive, DVD, or CD are called a disk image.


 

A Portable Life

“Computer” did not always mean a thing that computes; as recently as the 1960s, it actually meant a person. The US military and NASA employed human computers to perform complex mathematical calculations. As electronic computers evolved, they replaced human computers, and replaced the definition of a computer.


The early electronic computers were enormous. ENIAC, one of the earliest all-purpose computers, built in the 1940s, covered 1,800 square feet and weighed nearly 30 tons. (Not exactly a laptop.) It took an army of people just to keep it running.

Later computers (such as mainframes) in the 1960s also required many individuals to operate. Starting in the 1980s, the personal computer took off. Today, most people own several computers in various forms. We have therefore evolved from:

  • many people for one computer
  • one person for one computer
  • many computers for one person

The primary computer types today are desktops, laptops, tablets and smartphones. All of these are “personal” computers, because the owner is highly connected on a personal level to each device, as though it was a physical extension of that person.

If you think I’m exaggerating, watch the look on a young person’s face if they have misplaced or lost their smartphone; it’s not quite an amputation, but pretty close. So much of a person’s life can be on a computer that it quite literally becomes a part of them.

We can categorize computers as:

  • Non-portable: desktops
  • Highly portable: smartphones
  • Semi-portable: laptops & tablets

Given how personal “personal computers” are, it’s not a huge leap to correlate the type of computer to the type of person: non-portable, highly portable and semi-portable.

The Non-Portables

Non-portable people are the stable, steady stalwarts of society. They have established homes, travel little if at all, and are consistent, reliable, dependable and trustworthy. They may not always be creative, but are able to work with creative people to get the job done. They are conservative, resistant to change and comfortable in their routines. They may be perceived as cold and uncaring, but deep down can have big hearts. They just don’t wear their heart on their sleeve, but keep it safely tucked away, just in case. Their motto is: “If it ain’t broke, why even think about fixing it?”

The Highly Portable

Highly portable people are the dreamers and drifters. They move frequently, rent but never own, love to travel, and frequently change careers. At their worst, they may be unstable and flighty, but they are also very friendly, outgoing and full of new and original ideas. They are always challenging the status quo, and in doing so, get the world out of its comfort zone and move it forward. Their motto is: “Everything needs fixing.”

The Semi-Portables

Semi-portable people reside between these two extremes and are therefore more difficult to define. They can be very open and creative, and at other times closed and subdued. They excel as mediators and diplomats, bringing the other two types together and bridging the gap between them. They are the middle ground, the average, the in-between. Their motto is: “Let’s look together to see if it needs fixing.”

With AI (artificial intelligence) now developing at an astonishing rate, we are approaching the age where computers will be able to think and reason as people do. In what will be one of the greatest ironies of technological history, computers may again become persons. When that happens, your smartphone will indeed be “a portable life”.

How to lighten your backpack

If there’s one thing I hate, it’s carrying around a lot of stuff. I remember in college in the 80s having to carry my binders and textbooks around in a backpack – what a pain in the neck, and back.

Computers and the Internet have liberated us from having to carry around so much stuff. You can buy e-books (which are a bit lighter than regular books) and write notes in a document rather than on paper. But these advances have led to another problem: co-ordinating all your information and documents over several devices and locations.

Being a technical writer, I love to document everything, so I have hundreds of documents for all my personal needs. I need to be able to access this information:

  • in many locations – home, school, and work and
  • across multiple devices: my laptop, tablet, and phone

In addition, I want to be able to easily recover from what I call “The Terminator Scenario”. This occurs when your computer and backup drive both die or are stolen. I’ve known many people who lost everything when their hard drive died, because they forgot it was a moving part, and all moving parts eventually fail. It is only after this traumatic experience that they learn to back up their files.

But backing up to an external hard drive, while a good practice, does not protect you if someone breaks into your home and steals both your computer and hard drive, or if both are consumed in a fire, flood, tornado or some other natural disaster. It’s critical, therefore, that all your files also be copied to the cloud.

I’ve found that using Google Drive with Dropbox is an excellent solution. I use Google Drive (formerly called Google Docs) to store all my documents and spreadsheets that don’t contain any critical private information. Although Google Drive is mainly used to store Google documents, you can store any type of document on it, making it a handy backup tool.

Google Drive gives you 15GB of storage for free; you can upgrade to 100GB for $2 U.S. per month, the best deal I’ve seen for online storage. You can also install a Google Drive application on your desktop that stores and synchronizes copies of the files on your hard drive, providing yet another form of backup.

I use Dropbox to store most other types of files. It gives you 2.5GB for free; you can upgrade to a whopping 1 terabyte (that’s 1,000 gigabytes) for $119 CDN per year, so it’s more expensive than Google Drive, but better integrates with your current file set. It’s great for storing any type of file and allows you to easily upload and download files to and from your desktop.

I use Google Chrome because a) it’s a great browser and b) once I log in, all my settings, bookmarks and Chrome applications are automatically loaded. This is especially handy when I’m logging in on different devices, including those that are not my own, such as at a friend’s or at a hotel.

Wherever I am, I use Google Drive to make notes. I can then review and update them at home because they are the same files. I also use Dropbox to upload and download my project files. Again, whatever I work on at school, I copy to my Dropbox folder and it’s automagically there when I get home.

Finally, I use Google Calendar to remember appointments, Google Maps to not get lost, and, of course, Google Mail (Gmail) to access my email anywhere. These apps you’ve probably heard of, but did you know you can use Google Keep to store simple lists?

As you can tell, I’m a bit of a Google nut. I think if I ever have another kid, I’ll name him or her Google.

(Maybe that can just be their middle name….)

reCAPTCHA’d!


reCAPTCHA is an excellent example of not only solving an information-processing problem in a creative way, but, in solving the original problem, also solving a much larger one.

Before you can understand reCAPTCHA, you must first understand its predecessor: CAPTCHA. CAPTCHA was created to stop automated programs (or “bots”) from logging into websites and generating spam in the form of emails and mass postings.

A CAPTCHA screen displays a distorted image of letters or words. A person can read the letters, but a bot cannot. The user must enter the letters correctly to gain access to the system, for example, to sign up for an email account.

This technology alone is a great example of a creative solution to a complex problem. But reCAPTCHA takes it a step further by solving an even bigger problem.

This larger problem involves an ancient form of communication – the printed page. There are tens of thousands of books and newspapers that Google is trying to convert to digital text. Scanning the publications, then using OCR (optical character recognition) to convert the scanned image to text, has its limits. If the text is distorted (as it is in many of the older publications), OCR cannot convert it.

How does this relate to CAPTCHA? Well, about 200 million CAPTCHAs are done by people every day. If each CAPTCHA takes ten seconds, this effort represents about 63 person years of work every day.

Wouldn’t it be amazing if there was a way to put all this time to good use? That is exactly what reCAPTCHA does.

Here’s how reCAPTCHA works (a code sketch follows these steps):

  1. When a document is scanned, the OCR software detects a word that it cannot convert. Let’s call this the “unknown word”.
  2. The reCAPTCHA process sends this unknown word as a CAPTCHA for people to decipher.
  3. The CAPTCHA contains not only the “unknown word”, but another word which the system already knows. We’ll call this the “known word”.
  4. In the CAPTCHA that is created, the user is asked to read both words and enter them.
  5. If the user solves the known word, the system assumes that their answer will be correct for the unknown word.
  6. The system also gives the unknown word to a few other people to verify that the original answer was correct.
  7. If enough people agree on what the unknown word is, the information is sent back to the original system and the converted word is added to the document that is being digitized.
  8. This process is repeated until all the words in the document are converted.
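Here is a rough sketch of that workflow in Python. All the names and the agreement threshold are invented for illustration; the real system is far more sophisticated:

    from collections import Counter, defaultdict

    AGREEMENT_THRESHOLD = 3              # matching answers needed to confirm
    votes = defaultdict(Counter)         # unknown word image -> answer counts

    def submit(known_answer, known_truth, unknown_image, unknown_answer):
        """A user answers both words; the known word screens out bots."""
        if known_answer != known_truth:
            return False                 # failed the control word: rejected
        votes[unknown_image][unknown_answer] += 1
        return True                      # user passes the CAPTCHA

    def resolve(unknown_image):
        """Accept the crowd's reading once enough people agree."""
        answer, count = votes[unknown_image].most_common(1)[0]
        return answer if count >= AGREEMENT_THRESHOLD else None

    for _ in range(3):                   # three users read the same scan
        submit("corona", "corona", "scan_0042.png", "pandemic")
    print(resolve("scan_0042.png"))      # 'pandemic' -- the word is digitized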

Can you even begin to imagine the flash of genius that occurred in the mind of Luis von Ahn, the creator of the reCAPTCHA process?

The problem is that these types of “eureka” moments are very difficult to create. They often just happen, much like the weather. You can no more force yourself to be creative than you can force yourself to love, hate, forget something, fall asleep or go back in time.

However, you can sometimes find creative solutions if you just stop what you’re doing, and ask yourself some questions, such as:

  1. Is there a better way to present this information to the end user?
  2. What else would a user need to know about this concept, task, or thing?
  3. How does the user use our documents?
  4. What changes could be made to enhance the documentation development process?

I’ll give some examples of real-life creative solutions that I’ve encountered:

Example 1: Our help files have to be checked into a version control system. Each help project can contain hundreds of individual files, and these files are often created, deleted, moved and renamed. It would have been very cumbersome to keep track of each file that was checked in and out. The solution (from a colleague of mine) was this: instead of checking in and out the various files, a zip file of the entire help system was created and checked in instead. The installation program then decompresses this zip file. Only one file now needs to be sent and tracked in the build.

Example 2: I was working with a developer on a complex database administration application. One of the functions the user could perform was rerunning a query by clicking a button labeled, appropriately enough, Rerun query. The developer said the problem was that there were many different queries that the user could run, and that they needed a quick way to know which one they had run before re-running it. I asked if it was possible to embed the name of the query that had just run into the button name, so that, for example, if the user had run the Last Name query, the button label would be Rerun Last Name query. I still remember the developer’s eyes widening and his face lighting up as he recognized the elegant beauty of this solution. “Yes,” he said, “it can be done!”

Example 3: Many of our help projects share content, templates, and other settings. I wanted to develop a simple content management system that would allow all the writers to share these things across many locations. I created a master help project that contained all the common content and settings. I then linked my other help projects to this master project, so that if any of the common material changed, it would automatically be updated in the other help projects. Finally, I stored all the documentation on a version control system that could be accessed by any writer. As long as each writer has the current version of the master help project and links their other help projects to it, the templates and content remain standard.

So don’t just think “outside the box”.

Ask yourself if you even need the box in the first place.