(((I'm really enjoying this, even though I believe that "Artificial Intelligence" is so far from the ground reality of computation that it ought to be dismissed like the term "phlogiston.")))
Link: Open the Future: Singularity Summit Talk: Openness and the Metaverse Singularity.
"We can imagine parallel examples of the ways in which metaverse technologies could be shaped by deeply-embedded cultural and political forces: the obvious, such as lifelogging systems that know to not record digitally-watermarked background music and television; the subtle, such as augmented reality filters that give added visibility to sponsors, and make competitors harder to see; the malicious, such as mirror world networks that accelerate the rupture between the information haves and have-nots – or, perhaps more correctly, between the users and the used; and, again and again, the unintended-but-consequential, such as virtual world environments that make it impossible to build an avatar that reflects your real or desired appearance, offering only virtual bodies sprung from the fevered imagination of perpetual adolescents.
"So too with what we today talk about as a "singularity." The degree to which human software engineers actually get their hands dirty with the nuts & bolts of AI code is secondary to the basic condition that humans will guide the technology's development, making the choices as to which characteristics should be encouraged, which should be suppressed or ignored, and which ones signify that "progress" has been made. Whatever the degree to which post-singularity intelligences would be able to reshape their own minds, we have to remember that the first generation will be our creations, built with interests and abilities based upon our choices, biases and desires.
"This isn't intrinsically bad; emerging digital minds that reflect the interests of their human creators is a lever that gives us a real chance to make sure that a "singularity" ultimately benefits us. But it holds a real risk. Not that people won't know that there's a bias: we've lived long enough with software bugs and so-called "computer errors" to know not to put complete trust in the pronouncements of what may seem to be digital oracles. The risk comes from not being able to see what that bias might be.
"Many of us rightly worry about what might happen with "Metaverse" systems that analyze our life logs, that monitor our every step and word, that track our behavior online so as to offer us the safest possible society – or best possible spam. Imagine the risks associated with trusting that when the creators of emerging self-aware systems say that they have our best interests in mind, they mean the same thing by that phrase that we do.
"For me, the solution is clear. Trust depends upon transparency. Transparency, in turn, requires openness.
"We need an Open Singularity..."