Imagine this scenario: a runaway train is hurtling towards five people tied down to a railway track by a madman. You can pull a switch to divert the train onto another track where another person is tied down, killing the one person, but saving five.
Should you pull the switch? Most people will say yes, reasoning that it is better for one innocent person to die than five.
Now imagine this scenario: The same train approaches the five trapped people. This time, you are standing next to a fat man on a bridge over the tracks. If you push the man onto the tracks, his massive weight will stop the train but will kill him. However, the other five people will again be saved. So, do you give him a push?
Most people will say no, because even though the end result is the same (one person dies so that five are saved), this killing is direct and deliberate. In the first scenario, killing the one person is an indirect effect of pulling the switch.
You can vary this scenario endlessly. What if the fat man were also the madman? What if the one person tied up were your spouse, parent or sibling? What if the five people were all convicted murderers who had escaped?
All of these scenarios are variations of the trolley problem. There is an entire field devoted to ethical and psychological issues like these, called experimental philosophy, or X-Phi (the study of trolley problems in particular is sometimes called Trolleyology). Experimental philosophy uses data gathered through surveys of hypothetical moral scenarios. It's a scientific attempt to draw conclusions about how people reach moral decisions. The result is a strange brew of science, psychology and philosophy.
X-Phi is not just a series of extended thought experiments: it has very practical applications. For example, the recent financial meltdown was caused by many people making various choices and judgments. These people included not only the lenders and administrators of high-risk loans, but the recipients of these loans, and the various government agencies involved in regulating the loans. All these people made what in hindsight were appallingly bad choices. However, at the time, these choices may have seemed quite reasonable. By studying the factors that lead people to make bad choices, researchers hope to help people avoid making them again.
Technical communicators are often faced with ethical dilemmas as a result of conflicting needs and wants. These include:
- the need to give users all the information they require and want, versus the need to withhold information that is not required, and which could overwhelm or confuse them
- the desire to work in harmony with the people developing the product being documented, versus the need to be an end-user advocate, which can involve suggesting product changes that may take considerable effort to implement
- the desire to work well with other writers on a team, versus the need to ensure the documentation is of the highest quality through peer reviews, which can involve constructive criticism of another writer's work, or of your own
These conflicting needs can give rise to a variety of ethical dilemmas for a technical communicator. Here are a few examples to ponder:
You’re having trouble getting the required information about a certain feature from one of the subject matter experts (SMEs). The deadline for the release is fast approaching, and the SME is nowhere to be found. You approach the desk of the SME and see a folder on top with the title of the exact feature you need information about. However, the folder also has a large note, stating: DO NOT REMOVE. Do you borrow the folder, with the intention of giving it back as soon as you’re done with it, but knowing that the SME will notice the missing folder?
Now imagine the same scenario, but this time, you know the SME is away, and will therefore not notice the missing folder. Does this affect your decision? What if it were the same folder, but the SME had accidentally left it on your desk? The SME won't know you have it, but it still carries the same note saying it is not to be removed from their desk.
Another scenario: You have co-authored a user guide with another writer. Each of you has done exactly half the work. However, your manager believes you are the sole author, and praises you for writing such a fantastic document. If the other writer will never learn about this conversation, do you tell your manager you are not the sole author? What if you knew your manager was deciding which one of you to promote? Would you then tell your manager the truth, again assuming that, unless you speak up, the manager will never discover you were not the sole author? What if you had done 60% of the work? 75%? 80%? What if the other writer had previously left the company?
The point of these mental exercises is to pinpoint the exact conditions under which we believe a choice becomes immoral. By isolating the factors that influence our choices, we can learn much about the way we think and behave.
And no – I have no intention of stating what I would do in these various situations. In the book of life, the answers aren’t in the back section.