
    Damien Patrick Williams thoughtfully sent me a piece he was working on, and liked my response enough that he has encouraged me to share it with you, lovely internet.

    I fully recommend reading his work: If We Don’t Know What Consciousness Is Or Means, Why Are We So Scared Of It?

    Stealing Damien’s description of it: “This is about 1700 words on Elon Musk and Stephen Hawking’s comments this year about ‘artificial intelligence.’ They are… Well they’re really technophobic about it, honestly, which is just damn weird.”

The remainder of this post is a slightly tidied-up version of my original response to him. (The audio and text content are the same.)

    —–

    Dear excellent Damien,

    I largely agree with you.

    I don’t think that machine consciousness is that great of a threat to us. To me, a greater concern is how we would treat emerging forms of consciousness, were that to happen. I very much agree that there is a lot of philosophical work to be done and that it is complicated but absolutely worth doing, and that the greater the variety of viewpoints the better.

    Even the language we use needs to be considered. As you say, the labels here are already problematic.
    Intelligence is not a well defined thing. It’s one of those concepts where it might seem like we understand it—certainly it is a word we are able to use in conversation and more or less understand each other—but if you sit down and try to properly define it, it falls apart.
    There is a desire for intelligence to be something easy to track and measure, which can lead to a love for IQ tests, but they’re actually crap. Despite an intention to create some kind of test removed from cultural context, they continue to have huge bias problems and simply fail in that goal. I don’t think it even makes sense to think of intelligence as something that can be compartmentalised that way. There is also a lack of recognition of the full variety of ways we might process information as humans.
    I just don’t think standardised testing is the answer to understanding something as complex and nuanced as intelligence. I also don’t understand why we need a ranking system. Why do we need to line everyone up? In order to do that, we have to reduce the parameters we consider to so few that scores can be organised in this fashion. (In the case of a standard IQ test, for example, this means reducing everything to just one dimension. That’s quite the oversimplification of something where there isn’t even consensus on the definition.)

    Consciousness is similarly fraught, but I think you’re right to focus on it.
    I also agree that if machine consciousness happens, it won’t be very similar to how humans think or are conscious. But if we could make it conscious exactly like humans—whatever that even means—that wouldn’t actually be very useful. Certainly there’s nothing particularly safe about humanlike consciousness; we humans do a lot of shitty things to fellow humans. So it’s probably good that machine consciousness would be different. I suppose that’s the source of the fear. People look at how horrible we are—as a group of organisms when we have power—and they fear that being turned around and done to them.

The idea of a non-human consciousness does not need to be scary. In fact, we already have a lot of experience with non-human consciousness (which is another thing you already mentioned). Human arrogance really gets in our way, and the anthropocentric viewpoint is very limiting. It is really common to live with non-human animals. I have 2 cats. We get along really well despite our different approaches to things. I think studying non-mammals—the octopus, for instance—could lead to some deeply fascinating changes in our understanding of cognition, provided we can let go of human arrogance enough to get anywhere. (I should clarify here that such work is indeed being done by researchers who are open-minded in this way. I just wish there were more of it, and that the results more often crossed over to inform our understanding of consciousness in general.)

There is this absurd need for humans to be “the best”, even before we decide what features we’re going to compare in order to make such a ranking meaningful.
    So much so that we try to construct tests and make definitions so that we’re guaranteed to win. Even then we manage to fail sometimes. I think a great example of this is the whole history of comparing brain sizes and shapes and looking at brain to body ratio, then encephalisation, and making definitional changes when we didn’t get the result that we wanted (which is for humans to come out on top). All this instead of stepping back and deciding what it is we’re really looking at, and rethinking what it even means to be conscious or intelligent or what cognition even is. (Of course, computers are already much better than us at all kinds of things. Just like many non-human animals have superior performance in some categories, like eyesight, for example. We are already surpassed in various categories and yet we remain.)

There is this idea in cognitive science (sometimes referred to as a unifying theory) that cognition is information processing. I like this. That’s pretty unsurprising, because I’m a machine learning person and it conveniently gives me space to do the type of cognitive modelling that I enjoy. But it also really opens things up in general. Actually, this threatens to lead to a huge derailment about embodied cognition, where I enthuse about the idea of dance as information processing as cognition, so I’m going to resist that for now.

    Instead I will try to focus on one thing for the remainder of this.
    That one thing is ants.
You mention ants, and there are a tonne of things that come out of that. To start, there is the parable of the ant, which I’m sure you’ve encountered before. It’s this idea of a presumably human observer looking down at an ant on the beach and seeing a very complicated pattern as the ant moves about. That is then countered with the perspective of the ant, which reveals that the ant is actually behaving very simply and consistently; it is the surface of the beach that is complex and highly varied to something as small as an ant. The seemingly complex behaviour is actually coming from the environment rather than from the organism itself. I think this does a pretty big injustice to ants, in that I think ants are rather more complex than this parable gives them credit for. But the questions it raises still stand: how do you separate out what is coming from the individual and what is coming from their environment? When there are other organisms in the environment, how does that factor in? How do you determine what you can attribute to individual humans and what would need to be attributed to a group of humans and their environment? And how does that relate to how we view ourselves in relation to other organisms or entities, and how we decide things about them? It’s not uncommon for people to list human accomplishments when justifying their view that humans are “the best”, but if you do this then you have to think about humans collectively, and as a distributed system. Because as an individual human, I’m certainly not responsible for having constructed the city I live in, for instance. (Or really much of anything, frankly. I’ve made very few things.)

    Which leads us back to systems of things and their collective power.
    And more ants.
    Ants are not something that we could easily defeat.
    Actually, I don’t think we could fully defeat ants, easily or not. I don’t think we could remove them from an environment without making it unliveable for ourselves along the way.
There are huge numbers of fire ants in some places. They’re fascinating, hard to track completely, and they do sometimes kill people. A single ant versus a single human is no problem, but there is never just one; they operate as a distributed system. As far as I understand it, the most likely thing to eliminate them in any particular setting is an invasion of crazy ants. Both types of ants damage our technology: fire ants are attracted to electromagnetic fields, and crazy ants create shorts when they build nests in electrical boxes and other equipment. It’s the sheer number of them that lets them have such a large impact. Which is really how disease works too. Disease is another case of something small and distributed to which we are incredibly vulnerable.

So I disagree with Musk and Hawking that machine consciousness is our biggest threat. I think an airborne disease is much more dangerous. The flu already kills plenty of people every year; a more intense version of that could be really destructive. Climate change is a much bigger concern than machine consciousness. This is a big part of why Musk and Hawking’s views are dangerous: they distract from the real concerns. They are also contributing to a fear of technology, when a more reasonable fear is running out of drinking water. Or killing off enough pollinators that we aren’t able to grow enough food to feed ourselves. Or simply the unforeseen fallout of climate change. We’ve done a lot of damage, and we’ve done it with comparatively simple technology. We need to be more aware and ethical in our approach to technology across the board.

    I don’t see why we’d be very interesting to a being or collection of beings or cooperative of beings or distributed consciousness that utterly outstripped our cognitive capacities.
    I think there is a fair bit of human arrogance involved in fearing conscious machines with an understanding of the world that far outstrips ours. Even as a source for parts, I don’t think humans are very valuable. I think advanced entities will be about as concerned with us as we are with ants. Which is to say that if we collectively got in its way then maybe it would bother to get rid of some of us but otherwise why would we be worth harming? I don’t think humans being ignored is the same thing as humans being destroyed.

    However, it is completely worth thinking about how we should make something in a responsible way. It is definitely, as you say, complicated and difficult, but that doesn’t mean it isn’t worth doing. I think it’s really useful to sit down and think about what it is we’re developing and to think about what is ethical, and to approach things with an aim to do no harm. Avoiding harm requires deliberate effort, because we do plenty of harm even when we don’t mean to.

I think the concern is more about what humans themselves are doing with, say, the huge quantities of data we can now access and process. I think it is humans wielding computationally powerful machines that we need to fear, more than the machines themselves.
    Also: If we make something that is self aware, then I think a more realistic concern is protecting the thing itself. We need to think about what it means to interact ethically with a new kind of consciousness and our responsibility towards determining if it can experience something recognisable as pain or distress and our obligation towards protecting it from abuse.

For those who still fear the prospect of these new types of conscious entities, maybe the answer is the same anyhow. Treat them well. Protect them. Respect them. That way you’re leading by example as they develop, and giving them no reason to hate us when they pass us.

    Thanks for sending me your thoughts and inviting mine. I really enjoyed reading this, and I’m looking forward to reading your final piece.

    Nerdily yours,

    V

    p.s.
    BBC clip about fire ants
    Crazy ants

fhrevue posted this