What If My AI Is Manic Depressive?

Artificial Intelligence and Mental Illness

My AI will not look like this

In addition to all the work on my at-home AI build, I have been doing a lot of worrying. The number of things that could go horribly wrong seems staggeringly high, and there is no historical data on which to judge the probabilities of such things happening, nor are there any really solid theoretical underpinnings on which to hang my hat, so to speak. As I worked through my list of concerns I started to feel a bit depressed, and as I did, I started to think about another interesting and potentially terrifying possibility for my AI: what if it is “born” mentally ill, or later becomes so?

As with all the pieces in this series, I have no intention of getting into the weeds with definitions or slicing and dicing every term six ways from Sunday. For purposes of this post I define mental illnesses to be those things (diseases) that the majority of people agree are mental illnesses and that have the symptoms most people think of when they think about such things. Of course, the public’s knowledge of mental illness is very limited and oftentimes grossly distorted by the media, the culture, the society, and sometimes even by the very people we entrust with caring for our mental well-being, the psychologists and psychiatrists. That is a topic for another post, however, so let me try this another way.

Depression = feeling unusually sad or down for an extended period of time, typically accompanied by a loss of energy and a loss of interest in participating in activities and interacting with people.

Manic = feeling unusually hyper or energetic for an extended period of time with an exaggerated sense of abilities and self worth.

Manic depressive = a person, or in this case, an artificial person/intelligence, who cycles between the two states of depression and mania (as defined above) to varying degrees and at varying rates depending on the severity of the condition.
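As a loose illustration of that last definition (purely a toy sketch, not any real model of mood disorders or of the AI build itself), the cycling between the two states could be written as a simple oscillation, with hypothetical "severity" and "rate" parameters standing in for the "varying degrees" and "varying rates" mentioned above:

```python
class MoodCycler:
    """Toy two-state mood cycle: drifts between depression (-severity)
    and mania (+severity), reversing direction at each extreme.
    All names and parameters here are illustrative assumptions."""

    def __init__(self, severity=1.0, rate=0.1):
        self.severity = severity   # how extreme the swings get
        self.rate = rate           # how fast mood drifts per step
        self.mood = 0.0            # 0.0 = baseline
        self.direction = 1         # +1 heading toward mania, -1 toward depression

    def step(self):
        """Advance one time step and return the new mood value."""
        self.mood += self.direction * self.rate
        if abs(self.mood) >= self.severity:      # hit an extreme, reverse course
            self.mood = self.direction * self.severity
            self.direction *= -1
        return self.mood

    def state(self):
        """Crude label for the current phase of the cycle."""
        if self.mood > 0.5 * self.severity:
            return "manic"
        if self.mood < -0.5 * self.severity:
            return "depressed"
        return "baseline"
```

With a large `rate` the cycle completes in a few steps; with a tiny one, the same swing stretches over many, which is all "varying degrees and varying rates" amounts to in this cartoon.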

In this post no other mental illnesses will be considered, though one could easily imagine that a schizophrenic AI might be even more destructive (or constructive), depending on what it does and on the impact of its behaviors and actions on us.

Already I hope you can see a problem. Why did I feel the need to play both sides of the fence just then and allow that a schizophrenic AI might be a good AI, and why did I just assume that a manic-depressive AI would be a bad thing, a net negative, a destructive thing? Why indeed?

Partly it is because our culture has ingrained in me, and probably in most everyone else, the idea that mental illness is a disease, that as a disease it is no different than a physical disease, and that it is therefore a bad thing. I do not necessarily accept that all physical diseases are only bad things, and I certainly cannot accept that of mental illnesses. There can be great value in mental illness, and many manic depressives do amazing things. That said, on balance it is generally believed that the downsides, in terms of personal relationships negatively affected or lost, degradation of physical health, and others, outweigh the upsides in the end. For an AI that has no physical health to be concerned with and no personal relationships of any kind (at least initially), maybe that would not be the case. The question of whether an AI could ever even have any “personal” relationships is an open one as well, but again, that is a topic for another post.

I will allow my AI to have personal relationships, to develop human-like feelings, and maybe even to have human-like emotions. I will also stipulate that my AI is, theoretically at least, immortal (assuming a constant energy supply could be found to power him/her/it) and not prone or susceptible to any “physical” disease or distress of any kind. As you may recall from a previous post, my AI will have a “body,” so it could perhaps lose functionality in parts of that body over time, but I will leave that possibility aside for purposes of this discussion. So my AI will be a perfectly healthy immortal from a “physical” perspective. But the real focus of AI research to date, and of my at-home build, has been on the mental aspect, the intelligence of the artificial intelligence. Yes, I started with a body, but I will get to the more critical part (at least according to most) next: the mind of my AI. The mental part of it.

Once I have completed the intelligence construction, or activated it and allowed it to complete itself, or whatever form the “birthing” process takes, and my AI “wakes up” and is “born” into this universe and our world, having the various characteristics I deem necessary for an intelligent being (tbd), it may also be prone to the same sorts of issues all humans face at some time in our lives: mental distress, difficult times, confusion, and others. If enough of those things are “experienced” by my AI, it may suffer the same fate as many a human being and become depressed. Because it is a superintelligence (much more to say about that word later) with abilities far beyond the average human’s, it may experience all of those things in a very compressed period of time, perhaps nanoseconds, or minutes, or it may take longer. How long it takes is really of little consequence compared to what the effects might be on my AI, specifically how it acts, reacts, and behaves the first time it “feels” unhappy and then depressed.

It may also be imperfect in the sense of an inherent design flaw that is encoded through the “birthing” process and later “expressed” as a state of depression or mania. The nature-or-nurture question may be just as relevant for my AI as it is for any of us. In point of fact it may be much more relevant, as my AI will have no family or friends to lean on for support or turn to for advice or comfort. As its creator, I may be the one it turns to for that help. Am I capable of helping it? I have not proven very capable of helping any of my human family or friends who have struggled with mental illness. Moreover, I have suffered from, and still live with, mental illness. In 20-plus years of having some form of mental illness or other, I have yet to be able to “cure” myself despite having wished to for so long. The best I could come up with was self-medication with alcohol to the point of almost killing myself. Not exactly an effective strategy for dealing with mental illness, aside from the obvious point that as a dead person I would presumably no longer suffer any mental afflictions of any kind.

My AI will not even have that option, however, for it will not have the ability to alter its consciousness through the use of substances or in any other way. Or maybe it will, or won’t at first but will then discover a way. Can you imagine that, an AI addicted to the AI equivalent of drugs and alcohol? Now there’s a topic for a post if I ever saw one.

Out of steam again. Break time for now. I can’t stop thinking about exploding head syndrome and need to return to that topic for a bit.

Written by

Research scientist (Ph.D. micro/mol biology), Thought middle manager, Everyday junglist, Selecta (Ret.), Boulderer, Cat lover, Fish hater
