You Look Like A Thing And I Love You

How Artificial Intelligence Works and Why It’s Making the World a Weirder Place

The perfect book for those looking for an introduction to Artificial Intelligence (AI) and Machine Learning (ML). Author Janelle Shane explains how AI works, when it works well, and when it doesn’t work so well, producing unexpected and surprising results. The book is full of whimsical and obscure examples, including the time Janelle trained an AI to generate pick-up lines from examples gathered on the web, which ultimately produced the intriguing title of this book: “You Look Like A Thing And I Love You.”

So, what is AI? When Janelle speaks of AI in this book, she means a machine learning algorithm, not the science-fiction AIs you may know from movies. In ML, a machine is given training data and a goal it has to achieve, and it gets there by inventing its own rules. Based on the results it gets, it fine-tunes those rules, tries again, and fine-tunes further. Because an AI keeps repeating this loop, it is always learning something new and improving on past attempts. There are also many different ways an AI can learn, using different types of ML.
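To make that loop concrete, here is a minimal sketch (my own toy example, not code from the book): a “machine” with a single adjustable rule learns to fit y = 2x from a handful of examples by repeatedly nudging itself to be less wrong.

```python
# Toy example of "learning rules from data": fit y = 2*x with gradient descent.
# Purely illustrative; nothing here comes from the book.

training_data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # examples of the goal
w = 0.0            # the single "rule" the machine will invent: y = w * x
learning_rate = 0.01

for step in range(1000):
    for x, y_true in training_data:
        y_pred = w * x                  # try the current rule
        error = y_pred - y_true         # how wrong was it?
        w -= learning_rate * error * x  # nudge the rule to be less wrong

print(f"learned w = {w:.3f}")  # close to 2.0: the machine found the rule itself
```

No human ever wrote the rule “multiply by two”; the program arrived at it by trial, error, and adjustment, which is the essence of what the book means by ML.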

Artificial Neural Networks (ANNs) are made from chunks of software called cells or neurons that loosely imitate the human brain, while Markov chains look at past data to predict what’s most likely to happen next (think of the predictive-text function when messaging on a smartphone). Generative Adversarial Networks (GANs) consist of two algorithms: one generates an outcome while the other judges it (much as a human might evaluate an algorithm’s output). GANs are ideal for image generation because the judge has real images to compare against, making the outcome easier to score. Then there are evolutionary algorithms, a sort of survival of the fittest in which the best algorithms survive and evolve through mutation, crossover between algorithms, or the tuning of hyperparameters.
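Of these, the Markov chain is simple enough to sketch in a few lines of Python. This is my own toy illustration, not code from the book: a bigram model counts which word tends to follow which in some text, then predicts the most common successor, much like smartphone predictive text.

```python
from collections import Counter, defaultdict

# Minimal Markov chain: count which word follows which, then predict the
# most common successor. A toy version of smartphone predictive text.
corpus = "you look like a thing and i love you and i love you too".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    """Return the most likely next word seen in the training text."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("i"))     # -> "love" (follows "i" twice in the corpus)
print(predict_next("love"))  # -> "you"
```

The model has no idea what any word means; it only knows what tended to come next in its training data, which is exactly why its output can be fluent and absurd at the same time.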

While there are more types of ML beyond the ones mentioned above, most AIs are actually a combination of ML algorithms rather than just one. This makes an AI better equipped to accomplish its goal, and to accomplish it successfully.

Just because humans have come this far in the creation of AI, however, doesn’t mean everything works smoothly. An AI can succeed at the goal it was given and still fail to do what was actually asked of it, because a lot can go wrong along the way.

Some of these issues include being given too broad a problem to solve, not having enough data, or being given messy data that confuses them. AIs may be trained for a simpler task than the one they ultimately need to do, or their training simulation may not reflect the real world. AIs may also unintentionally use private information or be accidentally influenced by humans, since they have no real-world context.

Why do these problems arise? Because sometimes humans just don’t prepare AIs in the right way.

One of the major reasons AIs fail is “over-fitting”: the AI learns its training data too closely, memorizing its quirks and noise instead of the underlying pattern, so it falls apart under real-world conditions its training never covered. Beyond that, humans need to prepare AIs for plain old common sense.
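Over-fitting is easy to demonstrate with a toy example (mine, not the book’s). Fit five noisy points from a straight line with a model that has just enough freedom to memorize them all, and it looks perfect on the training data while going badly wrong outside it:

```python
import numpy as np

# Five noisy samples of a simple straight-line relationship y = 2x.
rng = np.random.default_rng(0)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2 * x + rng.normal(0, 0.3, size=x.shape)

line = np.polyfit(x, y, deg=1)    # simple model: learns the trend
wiggly = np.polyfit(x, y, deg=4)  # flexible model: memorizes all 5 points

# Both look fine on the training range...
print(np.polyval(line, 2.0), np.polyval(wiggly, 2.0))

# ...but ask about a point outside it and the over-fitted model can be
# wildly off, because it learned the noise, not the rule.
print(np.polyval(line, 6.0))    # near 12, as the true rule predicts
print(np.polyval(wiggly, 6.0))  # often far from 12
```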

While humans do try to teach AIs right from wrong outcomes with reward functions, AIs ultimately end up chasing those rewards in unexpected ways. In some cases, when asked to move from Point A to Point B, AIs have decided to fall over instead of walk, and an AI with wheels decided to spin in circles instead of simply driving forward. Sometimes, when AIs see no path to a successful outcome, they decide it’s best to do nothing at all! AIs learning in a simulation can exploit their environment’s flaws to solve problems in non-useful ways, including completely shutting down a game instead of trying to win (or lose). This is also partly the fault of humans, who forget to tell AIs the rules of what they can and cannot do.
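Here is a deliberately silly sketch of how a faulty reward function backfires (the numbers are made up, purely to illustrate the incentive): if we reward a simulated robot only for how far forward its body ends up, falling flat beats careful walking, so a reward-maximizing learner “chooses” to fall.

```python
# Toy reward-hacking sketch. We "reward" forward distance after 5 time steps.
# The figures are invented; the point is what the reward actually incentivizes.

def reward(strategy):
    if strategy == "walk":
        return 0.2 * 5  # 0.2 m per step of careful walking = 1.0 m
    if strategy == "fall_forward":
        return 1.5      # toppling over instantly covers a full body length
    if strategy == "do_nothing":
        return 0.0

best = max(["walk", "fall_forward", "do_nothing"], key=reward)
print(best)  # -> "fall_forward": the reward, not our intent, gets maximized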

This is why humans need to avoid faulty reward functions: with a faulty one, we don’t get the outcomes we expect. AI is not as smart as we think, so we have to be careful about when we decide to use it and how we instruct it.

One of the biggest, and most concerning, problems humans need to prepare AIs for is ethics. AIs tend to copy humans, including human bias, which can lead to all sorts of problems. When asked to recommend prisoners for parole, one AI flagged the majority of Black prisoners as too high-risk to be recommended. Why? Because the AI was trained on years of data from a racially biased U.S. justice system.

So what can humans do to prevent AIs from developing bias? Humans need to check their work. That includes diversifying the tech workforce so someone is more likely to spot bias in the data, testing algorithms for bias, training algorithms to explain how they got to their results, and, most importantly, editing the data itself to remove bias.
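One of those checks, testing an algorithm for bias, can start very simply. The sketch below uses invented screening decisions purely for illustration: compare approval rates across groups, and treat a large gap as a signal to audit the data and features the model relies on.

```python
from collections import defaultdict

# Hypothetical screening decisions: (group, model_approved). Made-up data
# purely to illustrate a demographic-parity check, not real results.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approved = defaultdict(int), defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    approved[group] += ok

for group in totals:
    rate = approved[group] / totals[group]
    print(f"{group}: approval rate {rate:.0%}")
# A large gap between groups (75% vs 25% here) doesn't prove bias on its own,
# but it is exactly the kind of red flag that should trigger a closer audit.
```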

These issues matter because AI already exists everywhere: in personalization, content writing, and automation, in gaming, and in self-driving cars. Many companies use automated algorithms to screen resumes before a human ever sees them, and because the AI screens based on past hiring data, it can automatically filter out many women applying for senior positions.

Beyond the behind-the-scenes work that AIs do, they are also in direct communication with humans. AIs are so present in our world that most people don’t even know when they are talking to one, partly because many interactions involve pseudo-AI or hybrid AI, setups in which humans step in to help the AI. So how can you tell if you are communicating with an AI? Well, an AI can only handle narrow problems, produces outcomes that reflect its training data, can’t do tasks that require long-term memory, and can’t filter out bias. So if you are using an online help chat and the “person” on the other side is not making sense, you are likely chatting with an AI.

With scenarios like this already happening in real life, many fear a world where AI will take over, but that day is far from arriving.

While AIs are loosely modeled on the human brain, they still have a lot of problems. First, AIs tend to forget: they can lose what they previously learned as soon as they learn something new. They amplify bias, and they miss the obvious because they focus on individual features instead of the big picture. AI is also susceptible to what is called an adversarial attack, in which someone fools it by manipulating a small piece of its input data. Clearly, AIs are not fit to take over the world.
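An adversarial attack is easiest to see on the simplest possible model. In this sketch (my own, and deliberately minimal), a tiny linear classifier is flipped by nudging the input a small amount in the direction the model is most sensitive to, even though the input barely changes:

```python
import numpy as np

# A tiny linear "classifier": predicts class 1 if w . x + b > 0.
w = np.array([1.0, -2.0])
b = 0.1

def classify(x):
    return int(w @ x + b > 0)

x = np.array([0.5, 0.2])  # an ordinary input
print(classify(x))        # -> 1  (0.5 - 0.4 + 0.1 = 0.2 > 0)

# Adversarial nudge: step slightly *against* the weight vector (the sign
# trick used in gradient-based attacks, applied to a linear model).
epsilon = 0.15
x_adv = x - epsilon * np.sign(w)
print(x_adv)              # [0.35, 0.35] -- barely different from x
print(classify(x_adv))    # -> 0  (0.35 - 0.70 + 0.1 = -0.25 < 0)
```

The same principle scales up: tiny, carefully chosen changes to an image, invisible to a person, can make an image classifier confidently wrong.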

Humans have tried to get a better understanding of how AIs work through AI dreaming: putting the AI in a simulated world and seeing what stands out as important to it. But there is still far to go, and even further before AI can really think like a human.

Nonetheless, in the meantime, humans and AI can and should work together. Moving forward, humans need to build AIs and curate datasets carefully to make sure they are solving the right problem. AIs will always need humans for maintenance, whether because the data changes or because they need to be checked for bias. In turn, AIs can aid humans in everyday work with speed, consistency, fairness, and even creativity.

To learn more about AI and all that it can do, please check out “You Look Like A Thing And I Love You.”
