Emotions, Noise & Optimality

A commentary on life, alignment and optimality from the perspective of emotions

2/3/2025 · 2 min read

How will we achieve AI alignment?

A few thoughts:

  1. Alignment perhaps isn't going to happen via reasoning; something smarter than us will always out-reason us

  2. Perhaps we should adapt and align to the values of a super-intelligence instead? The hope being that such a system will take rational decisions far superior to those of collective human intelligence

  3. And finally, if you ask me, the only way to align a super-intelligence is to make the system feel human emotions and let it align itself. An emotion module (similar in nature to what MemGPT did for memory) might just solve alignment without alignment even being the explicit objective.

But aren't emotions noise?

Many prevalent views of emotion render it as noise, or "glitches", emergent in our brain's network: the organic weights and biases of neurons. Numerous human endeavours are aimed solely at improvement and optimality through increasing this signal-to-noise ratio. The underlying assumption is that reducing emotional decision-making, or emotional thinking, leads to better outcomes in life overall.

What if?

What if reducing the noise down to 0 isn't the optimal path? What if some noise is beneficial for optimality?

Instead of attempting to reduce the noise to zero (the inherent bias a lot of people have towards noise), we could search for the requisite level of noise in the algorithm that optimises for the target objective we desire.

Note: Emotions aren't quantifiable, they say. But let's assume for a moment that we have a method to measure this perceived emotional noise as a number. Call it Emotion Density.

A brief detour

At a previous workplace, I was working with a customer at MIT who was testing an optimisation algorithm in the clustering space. He had written back to us claiming that certain functions in our optimisation library weren't working as expected: as he increased the noise level in his algorithm, the time and compute taken to reach optimality traced a hockey-stick-shaped curve. First down, then up.

The implication was clear. Optimality in this case (less compute, and hence less time) was best achieved not at zero noise but at the right level of noise. We couldn't resolve the case back then, but the idea stuck with me.

A paper in Nature discusses this idea:

Appropriate noise addition in meta algorithms leads to faster convergence
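The effect can be sketched with a toy experiment. This is not the paper's algorithm, just an illustration under made-up assumptions: noisy gradient descent on a double-well function, where zero noise leaves the search stuck in a shallow local minimum and a moderate amount of noise lets it hop the barrier into the deeper one.

```python
import numpy as np

def f(x):
    # Double-well objective: shallow local minimum near x = +0.96,
    # deeper global minimum near x = -1.04
    return (x**2 - 1)**2 + 0.3 * x

def grad(x):
    return 4 * x * (x**2 - 1) + 0.3

def noisy_search(noise_level, steps=5000, lr=0.01, seed=0):
    """Gradient descent with additive Gaussian noise, starting in the
    basin of the *local* minimum; returns the best point found, polished."""
    rng = np.random.default_rng(seed)
    x = 1.0  # start in the shallow well
    best_x, best_f = x, f(x)
    for _ in range(steps):
        x -= lr * grad(x) + noise_level * rng.normal()
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    for _ in range(1000):  # polish the best point with noise-free descent
        best_x -= lr * grad(best_x)
    return best_x

print(noisy_search(0.0))  # stuck at the local minimum near +0.96
print(noisy_search(0.3))  # escapes: settles near the global minimum at -1.04
```

Too much noise, of course, would stop the search from settling anywhere, which is exactly the hockey-stick shape: cost falls as noise rises from zero, then climbs again.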

Why do this?

I view emotions as signals that direct us over the course of a lifetime, whereas rationality directs us over shorter time periods.

In general here's my hypothesis:

  1. The dimensionality of the vector space of emotions is higher than the dimensionality of the reasoning space (thoughts)

  2. Emotions are projections of vectors in the emotion space onto the lower-dimensional thought space, and they manifest as noise in brain signals

  3. All rationality, perhaps, is a by-product of emotional coping mechanisms
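The projection idea in (2) can be made concrete with a small numpy sketch. Every number here is hypothetical, chosen purely for illustration: a 50-dimensional "emotion space", a 5-dimensional "thought space", and a random linear map between them. The part of the high-dimensional trajectory that the projection cannot represent shows up as a nonzero residual, and from inside thought space that residual is indistinguishable from noise.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical dimensionalities, for illustration only
D_EMOTION, D_THOUGHT = 50, 5

# A structured (non-random) trajectory in "emotion space":
# a smooth curve confined to a 3-dimensional subspace
t = np.linspace(0, 2 * np.pi, 200)
basis = rng.normal(size=(3, D_EMOTION))
emotion = np.stack([np.sin(t), np.cos(t), np.sin(3 * t)], axis=1) @ basis  # (200, 50)

# A fixed random linear projection into "thought space"
P = rng.normal(size=(D_THOUGHT, D_EMOTION)) / np.sqrt(D_EMOTION)
thought = emotion @ P.T  # (200, 5)

# Best linear reconstruction of the emotion trajectory from thought alone
recon, *_ = np.linalg.lstsq(P, thought.T, rcond=None)
residual = emotion - recon.T

# The projection preserves "thought" exactly...
assert np.allclose(recon.T @ P.T, thought)
# ...but a structured component of "emotion" is irrecoverable:
# it is invisible to thought space and reads as noise
print(float(residual.std()))
```

The residual is large and structured, yet no observer limited to the 5-dimensional thought space could ever tell it apart from random fluctuation.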

A few interesting questions

  1. Can I rigorously determine the dimension of emotion space and thought space?

  2. Can I perhaps measure the "Emotion Density" of an evergreen song quantitatively? (Under the assumption that timeless, evergreen songs have higher Emotion Density)

  3. Is Emotion Density related to Axiomatic Density?

  4. Is it all related to Information Density?

Key Takeaways (TL;DR)

  1. Emotions could be key to AI alignment

  2. Optimality might require a certain level of noise (emotions) rather than eliminating it completely

  3. Emotion and thought may exist in spaces of different dimensionality, with emotion potentially being higher-dimensional. Could we measure the Emotion Density of an entity?