I like your MO: it plants a solid foot forward for our common understanding while also presenting options for the next step.
(a) The argument for making AI human-like is necessary: AI is exactly like a mirror in front of which humans stand to decide if they like what they see. Its training data is everything presented and, hopefully, also presentable: golden datasets. Popular datasets become the de facto answers for further generations that distill cleverer answers, in a gradient descent, into a new intelligence median of "the least you should know"; an honor previously held by being able to google, own an encyclopedia, know which way is "up" in the water, or know which way is danger in the Earth's shadow.
Arguably the path to intelligence is not distinct from, but identical to, the evolution of light, matter, and life.
The theoretical sum of all human knowledge, or its more realistic hoarded accretion of selected fragments by any number of billionaires, can only result in a new high bar for what the average person is expected to know and be able to do.
Much better answers have been available to all humans who have spent a significant amount of time digging into an interesting discipline, as I know and admire in you. I think science is misunderstood: if you practice it, you understand something beyond belief. If you don't, you may only know it through its dogmatic and often gate-kept body of information, or not at all, except transparently through the use of everyday technology.
What I hear you saying is that the better our questions, the better the answers.
(b) Evolution necessarily pushes minds in the same direction as the eye: whether to know where "up-food" is in the ocean, or to know what "night-unsafe" means; minds are nowadays searching for that bit of light-knowledge in our reflective ocean of human writings. GPT arrives to present a new mirror for humans, just as the new Webb space telescope presents us with a deeper image of the further past. The machine-makers ponder and help drive that focus-evolution into what is possible. And so do the deal-makers, story-makers, and everyone else on the planet, to an incremental degree, until AI's kin become as ubiquitous as the search engine.
The internet may break its language barriers soon, and language models may after all save us from drowning in notifications/ads/spam.
What will happen with improving LLMs? I see a mirror that helps people present themselves better. I see the tantalizing opportunity to stare at our mind-language complex. As you mention in your follow-up post, there is plenty to be hopeful for. More so because the language barrier of computer language is being broken, and hordes of humans are joining the ranks of makers. To be hopeful is to know how much this new eye is our collective inner reflection: it opens so many good questions!
You've brought up many excellent points, Tyler, and sparked a series of intriguing questions:
Is superintelligent the same as super-meaningful? We don't have a universally accepted definition of "intelligence", which likely explains why we end up having such diverse discussions under the umbrella of AI.
Does awareness solely stem from our brains' physical processes? Is it feasible to recreate the intricate complexities of our biological and cultural heritage to produce an entity with inner life? And why should this AI entity mimic human thought processes unless we specifically program it to do so?
Emily Bender speaks really well about the idea of "people versus advanced AI"; you can check out her thoughts here: https://medium.com/@emilymenonbender/talking-about-a-schism-is-ahistorical-3c454a77220f
And when we ask "how dangerous is AI really?", L.M. Sacasas provides substantial food for thought:
"Build a techno-social system which demands that humans act like machines and, lo and behold, it turns out that machines can eventually be made to displace humans with relative ease."
—https://theconvivialsociety.substack.com/p/render-unto-the-machine
It's very important for us to talk about these things, especially now as our polarized media makes it increasingly difficult to feel like a community. I'm glad you and your team are leading these discussions.
I appreciate your great links and questions, Francisco!
What is awareness / intelligence / consciousness? I don't fully know, and I suspect no one really does, yet. But I think we can one day have much better answers to those questions — that there _are_ answers, scientific ones.
I particularly like your question, paraphrasing it a bit: Why would an AI mimic human thought unless we designed it to?
I think your point is that we shouldn't expect a strong similarity between us and an AI if we have not designed it that way. I partially agree and partially disagree (which is basically my MO). I agree that you can't expect very cool results to happen for free; for example, it seems unreasonable to expect an AI to suddenly become more intelligent than us if it starts out the same as us, and we know our own intelligence has trouble getting past where we are right now. At the same time, there are two kinds of gravity that might lead AI to be somewhat like humans: (a) we don't know of anything else to make AIs like — we are our only point of reference; and (b) it might be the case that evolution itself pushes minds in a certain direction, with a convergence or stable point around human-like thinking. The argument for (b) would come from an analogy with non-mind evolution: many body parts like eyeballs or legs are fairly standard across species, and not only because the different species are related, but because the commonly-evolved things (eyes, legs, etc.) work well! It's possible human-like brains also work well like this.
There's some speculation in there. I'm curious to see what happens with improving LLMs!