Marketing Bots and Driverless Cars

It seemed like such a good idea: program a marketing bot to automatically send out promotional messages whenever national observances such as holidays were on the calendar. But it wasn't. And what does this have to do with driverless cars?

According to the BBC, KFC's marketing message read (translated from German): "It's memorial day for Kristallnacht! Treat yourself with more tender cheese on your crispy chicken. Now at KFCheese!"

Kristallnacht (9 November 1938) was, for those unfamiliar with modern European history, the night when the Nazis attacked Jewish homes, businesses, and synagogues across Germany. It is widely commemorated as the beginning of the persecution that ended with the murder of over six million Jews. (If you live in the United States, just imagine KFC had made a similar suggestion for commemorating 9/11.)

The marketing bot had apparently been programmed to send marketing messages for national events without anyone realizing that some events are perhaps not best commemorated with a crispy chicken.

I don't know how much sophistication there was to this bot. Did it just cycle through a list of message templates? Did it come up with suitable suggestions based on the time of year and current promotions? Or did it employ some deep-learning algorithm that had found an obscure association between cheese and neo-Nazis?
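We don't know what KFC's bot actually looked like, but the simplest failure mode is easy to sketch. Here is a hypothetical version (all names and templates are mine, not KFC's): a bot that fires a promotional template for *any* calendar entry, followed by the one-line safeguard that would have prevented the gaffe.

```python
from datetime import date
from typing import Optional

# Hypothetical observance calendar, keyed by (month, day). Nothing in the
# data says whether an entry is appropriate for a sales promotion.
OBSERVANCES = {
    (10, 3): "German Unity Day",
    (11, 9): "Kristallnacht Remembrance",  # a solemn commemoration, not a sales event
    (12, 24): "Christmas Eve",
}

TEMPLATE = "It's {event}! Treat yourself at KFCheese!"

def promo_for(today: date) -> Optional[str]:
    """Naive bot: fires a promotion for any calendar entry whatsoever."""
    event = OBSERVANCES.get((today.month, today.day))
    return TEMPLATE.format(event=event) if event else None

# The missing requirement: a human-curated blocklist of observances
# that must never trigger marketing.
BLOCKLIST = {"Kristallnacht Remembrance"}

def promo_for_safe(today: date) -> Optional[str]:
    """Same bot, with the blocklist check the requirements never asked for."""
    event = OBSERVANCES.get((today.month, today.day))
    if event is None or event in BLOCKLIST:
        return None
    return TEMPLATE.format(event=event)
```

The point of the sketch is how small the missing requirement is: one set lookup. The hard part was never the code; it was someone realizing, before the fact, that the requirement needed to exist.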

KFC apologized for the bot's message about an hour later. No great harm was done, as far as I can tell. People complained. KFC looked foolish. But nobody was inspired to major violence as a result of a perceived insult.

It's hard to think of everything when automating something, and most of the time it doesn't matter too much if mistakes are made. Defining requirements rigorously can require significant effort, which is why some development methodologies skip detailed requirements altogether and simply iterate until an acceptable solution is reached.

For safety-critical systems, requirements definition is particularly hard. Nobody wants a nuclear reactor or an aircraft to behave in an unanticipated way because of unforeseen inputs, which is why safety-critical systems are kept as simple as possible, go through multiple review processes, and are very expensive as a result. Even then, things can go wrong, with fatal results.

Presumably nobody thought this bot was important enough to warrant much attention.

Surely nobody would design a safety-critical system with the same lack of attention to detail as a marketing bot, would they?

So the next time you encounter a policeman directing traffic by gesticulating wildly at you with a unique combination of hand signals and facial expressions, ask yourself what your future driverless car would make of the same situation. That's got to have been a thoroughly tested and well-thought-out part of the requirements, hasn't it?

† If you think defining requirements is easy, consider these lists of bad real-world assumptions that have tripped up programmers handling names and times.

15 November 2022
