What Is It Like to Be a Robot?

In “Metazoa”, Peter Godfrey-Smith explores the rise of consciousness in animals – from simple multicellular organisms to vertebrates like us.

Consciousness is a concept that’s not so easy to capture. It’s about a sense of self, about a perception of the environment and of oneself, about a subjective experience of the world. When does an animal qualify as conscious? Godfrey-Smith postulates that consciousness is a spectrum, not something one has or doesn’t have. The analogy he uses is sleep, or the state right after waking up: we are conscious, but at a different level of consciousness than when fully awake.

The nature of consciousness can be explored by taking extreme positions:

  • can you be conscious without any perception of the environment (a “pure mind”)?
  • does reacting to what happens around you without any emotion qualify as conscious?
  • do you need to have a nervous system and feel pain to be conscious, or is having a mood enough?
  • could you be conscious, but act indistinguishably from an unconscious animal?

I would have described consciousness as being aware of one’s own existence, something related to mortality, and rather binary. Godfrey-Smith equates consciousness more with having a sense of self and feelings, which makes it something less demarcated. He uses consciousness more like “awareness”, whereas I would use it more like “self-awareness”. (That said, even self-awareness maybe isn’t so binary. Between being aware of deadly dangers and being aware of your own existence, it’s hard to say when we transition from instinct to consciousness.)

The book focuses on the relationship between senses and consciousness. Godfrey-Smith explains how various animals sense the world and which kind of consciousness they might have. Some animals have antennae (shrimps), some have tentacles (octopuses), some feel water pressure (fish). Many animals have vision, but the eye structure can differ. Some animals feel pain (mammals, fishes, molluscs), but some don’t (insects) – although it’s not so easy to define when pain is felt or not. Not feeling pain doesn’t mean the animal is unaware of body damage, just like you don’t feel pain for your car but notice very well that something is broken while driving.

The book reminded me of “What Is It Like to Be a Robot?” by Rodney Brooks. This article, unsurprisingly, references Godfrey-Smith’s previous book, “Other Minds”. Brooks draws parallels between the perception of octopuses and artificial intelligence systems. Many of the questions Godfrey-Smith raises about the animal world can indeed be translated directly to the digital world. Computer systems have sensors, too. They have rules to react to inputs and produce outputs. They can learn and remember things, and develop an individual “subjective” perception of the world. They don’t “feel” pain, but can be aware of malfunctions in their own system. Does this qualify as a very limited form of consciousness?

The book touches at the end on the question of artificial intelligence, but only superficially. Rather than wondering whether an artificial intelligence could be conscious, Godfrey-Smith focuses on refuting the possibility of human-like artificial intelligence. His argument is basically that neural networks model only a subset of the brain’s physical and chemical processes and thus can’t match human intelligence (there are other physical and chemical processes at play in the brain besides synapse firing). He also argues that an emulation of these processes still wouldn’t cut it, since it wouldn’t be the real thing.

Artificial intelligence will not have a human-like intelligence, though. Each system (biological or digital) has its own form of intelligence. Because he anthropomorphizes artificial intelligence, Godfrey-Smith doesn’t explore the avenue of consciousness in AI systems much deeper. This is unfortunate, because with his consciousness-as-spectrum approach, it would have been an interesting discussion.


Practices vs Principles

It struck me when reading Scaling the Practice of Architecture that people often use the term “principle” in a sloppy way:

There is a great deal I could write here about bad architectural principles but I’ll stick to the key aspects. Firstly, they are not practices. Practices are how you go about something, such as following TDD, or Trunk Based Delivery, or Pair Programming. This is not to say that practices are bad […] they’re just not architectural principles.

I’ve probably been using the term in the wrong way more than once. Principles don’t tell you exactly how to do something. They are just criteria to evaluate decisions: all things being equal, take the decision that fulfills the principle the most. Examples of well-known design principles are:

  • Single-responsibility principle
  • Keep it simple, stupid
  • Composition over inheritance

A practice, on the other hand, is a way of doing something. Examples of practices are:

  • Pair Programming
  • Shift left with CI/CD
  • Limit Work in Progress (WIP)

A lot of documents confuse the two. For instance, the SAFe Lean-Agile principles are actually mostly practices.

It could look like principles are for software design and practices are for software delivery. But you can have principles for software delivery, too. For instance, “maximize autonomy” could be a delivery principle. It doesn’t tell you how. It just tells you that if you have two options to design the organization, you should go with the one that maximizes autonomy. On the other hand, a software design practice could be to “model visually”.

Another confusion in this area comes from a third term similar to principles and practices: values. A value is a judgment of what we consider important. Values usually describe desirable behaviors or qualities (though something like “profit” can be a value, too). “Autonomy”, for instance, could be a value. A value implicitly embodies the principle of favoring it over other concerns. For instance, if you value “autonomy”, you will automatically follow the principle “maximize autonomy”. If you adhere to a value, the corresponding principle comes for free.

Finally, there are “conventions” and “guidelines”. Conventions tell you exactly how to do things and are mandatory. You can check whether you adhere to a convention or not. This is unlike principles or practices, which leave room for interpretation. A guideline is like a convention, but optional. Examples of conventions or guidelines are:

  • Interfaces are versioned
  • Sanitize all inputs
  • Limit WIP to 3

Using a full example of value/principle/practice/guideline within one area, we could have:

  • value: resilience
  • principle: tolerate failures
  • practice: chaos testing
  • guideline: use tolerant reader

Granted, no matter how we try to distinguish the terms from one another, there will be some overlap in some cases. Natural language is messy. But I think it’s worth using the terms in the most appropriate way when possible. It helps create a mental model that works. If you mix practices, principles, values, and guidelines together, people might not notice immediately, but it creates a cognitive friction that makes it harder to actually apply the underlying ideas.

The Superpower of Framing Problems

Some problems we work on are concrete. They have a clear scope and you know exactly what has to be solved. Other problems we need to address are, however, muddy or unclear.

When something used to work but doesn’t work any more, the problem is clearly framed: the thing is broken and must be repaired. However, if you have something like a “software quality problem”, the problem isn’t clearly framed. Quality takes many forms. It’s unclear what you have to solve.

To explore solutions you need first to frame the problem in a meaningful way. With this frame in place, you can explore the solution space and check how well the various solutions solve the problem. Without a proper frame, you might not even be able to identify when you have solved your problem, because the problem is defined in such a muddy way.

The “quality problem” mentioned previously could be reframed more precisely, for instance as a problem of reliability, usability, or performance. It could be framed in terms of the number of tickets opened per release, or the time it takes to resolve tickets.

Depending on how you frame your problem, you will find different solutions. Using the wrong frame limits the solution space, or in the worst case, means you will solve the wrong problem. It’s worth investing the time to understand the problem and frame it correctly.

If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and five minutes thinking about solutions. – Albert Einstein

So far I’ve talked about framing problems. Framing works in a broader sense, though, and can be used whenever there is a challenge or an open question. Every time you have to come up with a solution, there is some framing going on.

Something interesting about framing is that, in itself, it isn’t about proposing a solution. It’s about framing the solution space. As such, people are usually quite open to reframing problems or exploring new frames. Whereas proposing solutions can lead to heated discussions, when it’s only about framing, the friction with other people is usually pretty low. While framing in itself is not a solution, it does impact the solutions you will find. When people don’t agree on a solution, they usually have different implicit frames for the problem. Working on understanding the frames is sometimes more productive than debating the solutions themselves.

A second interesting thing about framing is that you don’t need to be an expert in the solution to help frame problems. You need to be an expert in the solution space, but not in the actual solution. Going back to the example of the “software quality problem”, you can help with framing if you know about software delivery in general. You don’t need to be a cloud expert or a process expert. This means that good framing skills are more transferable than skills about specific solutions.

I wrote a long time ago about using breadth & depth to assess whether a thesis was good. In essence, this is a specific frame for the problem of thesis quality. Finding good frames for problems helps in many other cases. Framing problems is a great skill to learn.

SAFe: Systems Thinking

I was pleasantly surprised to see Systems Thinking as principle #2 in SAFe. I recently came in contact with systems thinking when reading Limits to Growth, which explores the feedback loops in the global economy. Donella Meadows is also the author of Thinking in Systems, which addresses more generally how to understand complex system dynamics with such feedback loops (the book is on my to-read list).

This is the definition of systems thinking according to SAFe:

Systems thinking takes a holistic approach to solution development, incorporating all aspects of a system and its environment into the design, development, deployment, and maintenance of the system itself.

It’s quite general. But arguably, there isn’t one single definition of systems thinking. If you read Tools for Systems Thinkers, the study of feedback loops is only one aspect of systems thinking. The more general theme is to understand the “interconnectedness” of the elements in the system.

A system is a set of related components that work together in a particular environment to perform whatever functions are required to achieve the system’s objective. – Donella Meadows

Principle #2 in SAFe is about realizing that not only the solution but also the organization is a complex system that benefits from systems thinking.

Interestingly, Large Scale Scrum (LeSS) also has systems thinking as a principle. It’s more concrete than the equivalent principle in SAFe. The emphasis is on seeing system dynamics, especially with causal loop diagrams. The article is a very good introduction to such diagrams. Here’s an example of a very simple causal loop diagram:

[Image: a simple causal loop diagram]

I like the emphasis on actively visualizing system dynamics:

The practical aspect of this tip (NB: visualizing) is more important than may first be appreciated. It is vague and low-impact to suggest “be a systems thinker.” But if you and four colleagues get into the habit of standing together at a large whiteboard, sketching causal loop diagrams together, then there is a concrete and potentially high-impact practice that connects “be a systems thinker” with “do systems thinking.”

The idea is that only when you start visualizing the system dynamics do you also start understanding the mental models that people have, and only then can you start discussing improvements.

I like that LeSS addresses systems thinking in a more concrete way than SAFe. Recently, I discussed with our RTE a cultural issue related to know-how sharing. Using a causal loop diagram would have been a very good vehicle to brainstorm about the problem. I think I will borrow the tip from LeSS and start sketching such diagrams during conversations.

The Brain and Probabilities

The brain is a wonderful machine with impressive computing power. We can make sense of complex information effortlessly and almost instantly. But it has one big flaw: it does not understand probabilities.

When presented with information, the brain tries to explain it by building a coherent story out of it. To do so quickly, it relies on shortcuts, which largely ignore probabilities. So the story you get isn’t necessarily the most probable one, but rather the cheapest one it could construct, as long as the story remains plausible.

One of the shortcuts is to trade availability of information for probability (availability heuristic): the information you can recall quickly is deemed more probable than other information. As a result, the perceived probability of sensational events inflates while that of mundane events shrinks.

Another shortcut is to only consider what is visible and extrapolate from there (“what you see is all there is”): only the visible information is considered, without even considering that something could be missing from the picture. Somebody who looks nice will be considered a nice person, unless additional negative information about them is given.

The brain tries so hard to build a story that it will see patterns even in random data. It will infer causality very quickly, and from very little evidence. As Kahneman puts it, the brain is “a machine for jumping to conclusions”.

It’s important to remember the distinction between plausible and probable when it comes to judgment, because we’re making decisions all day long.

You hear that a project used a new methodology and was very successful with it, so you want to use it as well? Beware the survivorship bias. You don’t know how many projects used the methodology and failed…

You think the biggest risk in your project is a distributed attack from China? Beware the availability heuristic. Your biggest risk might be to not have proper input validation…

You see bug reports for your teams and start detecting a pattern? Beware the law of small numbers. Your sample size might be too small…

Our brain is hardwired to tell us stories, so it’s very hard to improve our handling of probabilities. Often, we’re simply not aware that probabilities are at play. And even when we are, it’s really hard to override our instinct in some cases: if you flip a coin 9 times and get heads 9 times, the probability of getting tails on the 10th flip is higher, right? Well, actually not. But it’s hard not to feel that it is.
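
If the math alone doesn’t convince your instinct, a short simulation can help. Here is a minimal sketch (my own illustrative example, not from Kahneman’s book): it simulates many runs of 10 fair coin flips, keeps only the runs that start with 9 heads, and checks how often the 10th flip comes up tails.

```python
import random

random.seed(42)  # reproducible runs

runs_with_9_heads = 0
tails_on_10th = 0

for _ in range(1_000_000):
    flips = [random.random() < 0.5 for _ in range(10)]  # True means heads
    if all(flips[:9]):               # the first 9 flips were all heads
        runs_with_9_heads += 1
        if not flips[9]:             # ...and the 10th flip was tails
            tails_on_10th += 1

print(f"runs starting with 9 heads: {runs_with_9_heads}")
print(f"fraction with tails on the 10th flip: {tails_on_10th / runs_with_9_heads:.3f}")  # ≈ 0.5
```

The fraction stays around 0.5: the coin has no memory of the previous flips.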

So, the best remedy is to default to a healthy skepticism and accept that the outcome of many situations in life is simply the result of chance. It might sound like fatalism, but it isn’t at all. Your actions will influence the outcome of many situations, but you don’t know how. Don’t buy the first plausible explanation your brain tries to sell you; it’s probably not the right one.


Lateral Thinking

Lateral thinking is a term coined by Edward de Bono to characterize the generation of alternative ideas, as opposed to vertical thinking, which generates ideas based on logic and stepwise refinements. A more common way to put it is “thinking outside the box.”

Often, finding the best solution to a problem requires a creative move to step away from the existing solution and start from a new angle. This is where lateral thinking can help.

As a reminder of the power of lateral thinking, let us take an egg and a spoon. You are hosting a brunch. How do you help your guests cut open their eggs?

With vertical thinking you might come up with this solution:


With lateral thinking, maybe with this one:


I was absolutely amazed the first time I saw this device in action. The cut is perfect. Also, I would probably never have come up with this solution, no matter how long I stared at my egg.

Each time I discuss a design issue, I remember my last brunch, try to take some distance from the situation, go back to the root of the problem to solve, and ask: could we do this completely differently?

Sometimes the best way to cut an egg is actually not to cut it.

The Zen of Oscar

Writing a good research paper or thesis is hard. It can be very intimidating to figure out the scope of the work that needs to be done. Thanks to Oscar, I have been empowered with three words to think about problems of scope: breadth, depth, and completeness.

Breadth and Depth

At the early stage of my thesis, I prepared a list of action items that I proposed to address in my research. I went to see Oscar for confirmation that addressing these points would lead to a good thesis. I described the list of action items to him and asked, “Oscar, if I realize all this, will this make a good thesis?”. He looked at me and said, “For a good thesis, you must cover a topic with sufficient breadth and depth”. I came back to my office, confused and worried.

What I had failed to understand is that a thesis needs a frame. A frame has a breadth and a depth. The breadth characterizes the perspectives on the idea, and the depth the level of detail. Say your thesis is about dynamic updates. Technical feasibility and user adoption are two perspectives that belong to the breadth dimension. Implementation and formalization are different levels of detail that belong to the depth dimension.

Different pieces of work have different ratios of breadth and depth. A position paper might have lots of breadth, but little depth. A paper proposing an optimization of an algorithm has little breadth, but lots of depth. For a thesis, you need a good ratio of both.

Completeness

Later on, I was invited to prepare an extended version of a conference paper for a journal. I prepared a draft of the changes and went to see Oscar. I asked, “Oscar, are my changes enough for an extended journal paper?”. He looked at me and said, “For a journal paper, you must do what is necessary for your research to be complete”. I came back to my office, confused and worried.

What I had failed to understand is that the maturity of research is not defined by how much effort has been put in, but by the amount of speculation left. Research is an incremental process, and a piece of research is complete when all that is needed to support the claim (results or analysis) has been provided. Let’s imagine that you have an implementation of a dynamic update algorithm that you claim is efficient. You must show efficiency in both speed and memory use for the research to be complete.

Thinking in terms of breadth, depth and completeness has become a simple technique in my toolkit of planning methods.

Mind Blown

There are lots of things to learn and know. Some are funny trivialities, some are joyful discoveries, some are intriguing theories, some are insightful lessons, … and some are mind-blowing revelations.

Here’s my top 10. Some of them still blow my mind!

Things are only impossible until they’re not. — Jean-Luc Picard

Have fun!

  1. Public-key cryptography
  2. Lamport’s bakery algorithm for mutual exclusion
  3. Meta-circular evaluation and homoiconicity
  4. Storing state with flip-flop
  5. 0.999… = 1
  6. Non-Euclidean Geometry
  7. Imaginary numbers
  8. The necessity of the axiom of choice
  9. Fixed-point combinator
  10. Escape velocity

Fun with iTunes Shuffle and Probabilities

I recently tagged and imported all my mp3s into iTunes. I then noticed that there were lots of albums that I had only partially listened to, and I decided to use the “Party Shuffle” feature to listen to my library randomly and eventually hear all the songs.

After a couple of weeks, I observed that some songs would reappear in the playlist and were picked twice. Over the weeks, the frequency of “re-entry” songs increased, with the direct consequence that new music was played less and less. Even though I had already realized that it would not be possible to hear all the songs with this approach, I was still surprised by the “re-entry” rate, which I would have intuitively expected to be much lower.

I turned to probability to better understand the situation.

Let n be the size of my library. After t songs played randomly, what is the probability that a given song was played at least once? Is it simply

P( song played at least once ) = t / n ?

Absolutely not! This probability is 1 minus the probability that the song was never played, which gives:

P( song played at least once ) = 1 – ((n-1)/n)^t

More generally, the probability that a given song has been played exactly x times after t plays is given by:

P( x ) = C( t, x ) * (1/n)^x * ((n-1)/n)^(t-x)

where C( t, x ) is the binomial coefficient, i.e. the number of ways to choose which x of the t plays are that particular song. Expanded:

P( x ) = (1/n)^x * ((n-1)/n)^(t-x) * t! / ( (t-x)! x! )

Note that the probability that the song was never played (x = 0) is still ((n-1)/n)^t.

After t songs, the sum P(0) + P(1) + … + P(t) = 1, which confirms that the formula is consistent.
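
As a sanity check on these formulas, here is a small sketch (my own script, with an assumed library size and number of plays) that evaluates P(x) and the two quantities above:

```python
from math import comb

# P(x): probability that a given song was played exactly x times
# after t uniformly random picks from a library of n songs (binomial distribution).
def p_played_exactly(x: int, t: int, n: int) -> float:
    return comb(t, x) * (1 / n) ** x * ((n - 1) / n) ** (t - x)

n, t = 500, 300  # assumed library size and number of plays

p_never = p_played_exactly(0, t, n)
print(f"P(never played)         = {p_never:.3f}")   # ((n-1)/n)^t
print(f"P(played at least once) = {1 - p_never:.3f}")
print(f"sum of P(0..t)          = {sum(p_played_exactly(x, t, n) for x in range(t + 1)):.3f}")  # 1.000
```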

The average number of songs played at least once after t songs can then be computed as:

Avg. played = n * P( song played at least once ) = n * ( 1 – ((n-1)/n)^t ) = n – (n-1)^t / n^(t-1)

The probability of hearing a new song (one minus the “re-entry” rate) can be computed as ( n – avg. played ) / n, which is simply the probability that a given song was never played, P(x=0).

The graph below shows the probability that a song was never played, for a library of 500 songs, after 0, 50, 100, etc. songs. It’s interesting to notice that the probability of hearing a new song falls below 50% after about 350 songs.

[Graph: probability that a given song was never played, for a library of 500 songs]
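
The numbers behind such a graph can be reproduced in a few lines (again a sketch of my own, assuming a library of n = 500 songs and one uniformly random pick per play):

```python
n = 500  # assumed library size

# Probability that a given song was never played after t random picks,
# i.e. the chance that this song would still count as "new music".
for t in range(0, 501, 50):
    p_never = ((n - 1) / n) ** t
    print(f"after {t:3d} songs: P(never played) = {p_never:.2f}")
```

At t = 300 the probability is still around 0.55; it only drops below 0.5 shortly before t = 350.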