RAFAEL YUSTE
You’ve probably never heard of Rafael Yuste, but he’s working to defend your cognitive liberty.
• Your right to mental privacy
• Your right to fair access to augmentation
• Your right to freedom of bias
• Your right to personal identity
• Your right to free will

Yuste is a neuroscientist, a professor at Columbia University, and cofounder of the Neurorights Foundation, an advocacy group that works to protect people from the potential misuse or abuse of neurotechnology.
Yuste’s main goal is to establish legal and ethical frameworks for consumer brain-computer interfaces (BCIs) that support their beneficial uses while also protecting fundamental human rights.
In partnership with international human rights lawyer Jared Genser, Yuste has already made headway. He collaborated with the government of Chile to amend its constitution, making Chile the first country in the world to protect the brain activity of its citizens. He has also helped the Brazilian state of Rio Grande do Sul amend its constitution, and assisted Colorado and California in enacting neural-data protection laws.
“It is very good news for humanity, for medicine, to help in clinical cases,” states Yuste at Congreso Futuro in Chile.
“But it’s also worrying, because the brain is not just another organ of the body like the kidney or the spleen or the lung. This organ generates all our mental and cognitive activity. I’m not just talking about all our thoughts, obviously our language, our memories, our emotions, our decisions—all our behavior—our conscience, our identity, our personality, and even the subconscious.
Everything is generated by the brain. So, if brain activity can be decoded, then mathematically it will be possible to decode people’s minds.”
The issues surrounding consumer BCIs are complex, especially considering that bidirectional consumer BCIs are currently in development: these devices can not only read brain activity (download) but also write information to the brain (upload).
Yuste’s five neurorights have informed recently released documents from UNESCO and the United Nations General Assembly that raise questions such as:
• Can your employer monitor your state of mind?
• Can your brain data be commercially sold?
• Could neurogaming devices drive or diminish addictive behavior if reward circuits in the brain are targeted?
• What about cognitive enhancement of healthy people: could that lead to discrimination or social inequality?
• Could people be punished for unexpressed thoughts or intentions (i.e., “pre-crime”)?
And then there’s the big one, as it relates to one’s sense of self:
How will it be possible to distinguish between one’s own inner dialogue (thoughts, emotions, beliefs) and tech-triggered electrical stimulation influencing the brain’s activity?
Today, neuroimaging already enables inferences about our mental states (such as memories, emotions, dreams, and intentions), and science has shown that implanting false memories is nothing new: it’s an established area of research spanning more than 50 years, led by cognitive psychologist Elizabeth Loftus and known as “the misinformation effect.”
But now, connecting brains to digital networks is no longer fiction. As the design, development, testing and deployment of consumer BCI devices grows, so will public awareness.
The ethical debates have just begun.
GEOFFREY HINTON
Geoffrey Hinton has warned that there is a serious chance AI will lead to human extinction within the next 30 years.
Known as “the godfather of AI,” Hinton has received the Nobel Prize in Physics, the Honda Prize, and the Turing Award.
Since leaving his role as VP and Engineering Fellow at Google Brain/Google AI (2013–2023), Hinton has emerged as a sentinel for society, warning the public of the dangers of AI. In an interview with BBC Newsnight, Hinton says:
“It’s going to lead to a big increase in productivity which leads to a big increase in wealth—and if that wealth was equally distributed that would be great—but it’s not going to be.
That wealth is going to go to the rich and not the people whose jobs get lost, and that’s going to be very bad for society, I believe. It’s going to increase the gap between rich and poor.”
Hinton also explained at the AI for Good Summit that GPT-4 already “knows more than any human.” Due to AI’s digital nature (its ability to learn and share knowledge far more efficiently than humans can), AI has become “a better form of intelligence.”
Hinton has even proposed that AIs can have subjective experience, which, in turn, would mean they’re conscious.
“Most people have a view of the mind as an internal theatre, and they’re just wrong,” says Hinton.
“Do you believe that AI systems can have a subjective experience?” asks The Atlantic’s CEO Nicholas Thompson.
Hinton responds: “I think they already do.”
SOURCES: Rafael Yuste
https://www.nature.com/articles/551159a
https://www.nature.com/articles/551159a.pdf
https://www.youtube.com/watch?v=kecLyF4oxjA
https://www.youtube.com/watch?v=4W3TKJBjmuY
https://www.youtube.com/watch?v=ffczS2BEWoU
https://ntc.columbia.edu/rafael-yuste/
https://neurorightsfoundation.org/
https://neurorightsfoundation.org/people
https://neurorightsfoundation.org/reports
https://documents.un.org/doc/undoc/gen/g24/133/28/pdf/g2413328.pdf
https://unesdoc.unesco.org/ark:/48223/pf0000391074
https://leg.colorado.gov/sites/default/files/documents/2024A/bills/2024a_1058_ren.pdf
https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1223
SOURCES: Geoffrey Hinton
https://x.com/geoffreyhinton/status/1719447980753719543
https://www.youtube.com/watch?v=dNjClDI6zT4
https://aiforgood.itu.int/summit24/
https://www.youtube.com/watch?v=Es6yuMlyfPw
https://www.youtube.com/watch?v=MGJpR591oaM

