I think of it like time dilation, such as near a black hole (see what I did there, tying the two singularities together).
From the perspective of someone experiencing time dilation, nothing appears unusual; everything seems normal. It is only from the outside perspective that things are strange.
As far as I can tell, the singularity happened in the late 1700s. For thousands of years the collective economic growth of the world was effectively a straight, shallow line: it grew, but slowly and linearly. Then in the late 1700s something changed. Growth went exponential and everybody was along for the ride, and from the perspective of being caught up in that exponential growth it appears flat, normal even. But you look at history and wonder why every advance was so slow, and you look ahead and say the singularity is almost there. Yet we will never actually reach it; by the time we get there, it is the new normal.
Humanity has been rapidly advancing throughout recorded history. We just gloss over advancements in outdated technology. Who cares exactly when the stirrup was invented when we have cars? Medieval armor was vastly better than what was available in the Roman Empire, but it didn't suddenly jump to better; there was a host of minor innovations.
The amazing complexity of rigging seen in the age of sail was built on a long line of innovation, but engines rendered it largely irrelevant, etc. As such, ~1700 isn't some clear tipping point, just the horizon before which innovation seems less relevant.
> As such ~1700 isn’t some clear tipping point, just the horizon before which innovation seems less relevant.
GP wrote "late 1700's". He's probably referring to the industrial revolution.
I think the classic definition of the singularity, though (per Kurzweil's book), is precisely the point when the curve no longer looks flat on humanly comprehensible timescales.
I.e., one day there are significant overnight changes; the very next day, hourly changes; soon thereafter, changes every minute, every second, every millisecond, etc.
Same thing could be said about the period around 10,000 BC (give or take a few thousand years).
Glacial economic growth for hundreds of thousands of years beforehand and then "something changed".
It's like how talking about the "hockey stick curve" in an exponential growth line is nonsensical. The hockey stick is everywhere; the curve's apparent angle depends entirely on the scale you view it at.
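A quick way to see that scale invariance: for an exponential, any window normalized to its starting value is the same curve no matter where the window starts. A minimal sketch in Python (the doubling time of 2 is an arbitrary assumption):

    # Any window of an exponential, normalized by its starting value,
    # is identical: f(t0 + dt) / f(t0) = 2^(dt/2) regardless of t0.
    def f(t):
        return 2 ** (t / 2)  # doubles every 2 time units (arbitrary)

    window_a = [f(0 + dt) / f(0) for dt in range(11)]
    window_b = [f(30 + dt) / f(30) for dt in range(11)]
    assert all(abs(a - b) < 1e-9 for a, b in zip(window_a, window_b))

So there is no privileged "elbow": wherever you stand, the past looks flat and the near future looks like a wall.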
I think the mistake here is missing that there is a certain rate of progress at which humanity can no longer even collectively process the progress, and which is therefore equivalent to infinite progress. That point is the singularity, and it requires non-human-driven progress. We may or may not reach it, but full automation is a requirement for reaching it. We may hit a hard wall and devolve to an S-curve, hit a maximum linear progress rate, hit a progress rate bounded by population growth and human capability growth (a much slower exponential), or pass the 1/epsilon slope point where we throw up our hands (the singularity). Or have a dark age where progress goes negative. Time will tell.
The problem with the concept of "the singularity" is that it has a hidden assumption that computation has no relationship to energy. Which, once unmasked, is a pretty outlandish claim.
There is a popular illusion that technological progress is somehow a pure function of human ingenuity, and that the more efficient we can make technology, the faster we can make even better technology. But the history of technology has always been the history of energy usage.
Prior to the emergence of Homo sapiens, "humans" learned to cook food by releasing energy stored in wood. Cooking food is often considered a prerequisite for the development of the massive, energy-consuming brain of Homo sapiens.
After that it took hundreds of thousands of years for Earth's climate to become stable enough to make agriculture feasible. We see almost no technological progress until we start harvesting enormous amounts of solar energy through farming. Not long after this we see the development of mathematics and writing since humans now had surplus energy and they could spend some of it on other things.
You can follow this pattern through the development and extraction of coal, oil, etc. You can look at the advancement of technology in the last 100 years alongside our use of fossil fuels and the expansion of energy capabilities with renewables (which historically have only been used to supplement, not replace, non-renewables).
Technological progress, then, has always been a function of energy, and more specifically, going back to cooking food, computational/cognitive ability likewise demands ever higher energy consumption.
All evidence seems to suggest that we increasingly need more energy for incrementally smaller return on computation.
So for something like the singularity to happen, we would also need incredible changes in available energy (there's also a more nuanced argument that you also need smooth energy gradients but that's more discussion than necessary). Computation is not going to rapidly expand without also requiring tremendously large increases in energy.
Further, it's entirely reasonable that there is some practical limit to just how "smart" a thing can be, based on the energy requirements to get there. That is, you can't reasonably harvest enough energy to create intelligence on the level we imagine (the same way there is a limit to how tall a mountain can be on Earth, due to gravity).
As with most mystical thinking, ignoring what we know about thermodynamics tends to be a fundamental axiom.
There are hard limits on how much energy we can provide to computation, but we are not even close to what we can do in a non-suicidal way. In addition to expanding renewables, we could also expand nuclear and start building thorium reactors; that alone ensures at least an extra order of magnitude in capacity compared to uranium.
As for the compute side, we are running inference on GPUs which are designed for training. There are enormous inefficiencies in data movement in these platforms.
If we play our cards right we might have autonomous robots mining lunar resources and building more autonomous robots so they can mine even more. If we manage to bootstrap a space industry on the Moon with primarily autonomous operations and full ISRU, we are on our way to building space datacenters that might actually be economically viable.
There is a lot of stuff that needs to happen before we have a Dyson ring or a Matrioshka brain around the Sun, but we don't need to break any laws of physics for that.
Don’t forget the practical ability to dissipate waste heat on top of producing energy. That’s an upper limit to all energy use unless we decide boiling ourselves is fine, or find a way to successfully ignore thermodynamics, as you say.
We can always build a sunshade at the Earth-Sun L1 point. Make it a Sun-facing PV panel with radiators pointing away from us and we can power a lot of compute there (useful life might be limited, but compute modules can be recycled and replaced, and nothing needs to be launched from Earth in this case).
If we ever got that far, that would be the most compelling argument for datacenters in space.
I think this is accurate. However, this does not mean that the exponential isn't real; it just isn't sudden. We have been living through continuously accelerating technological and economic growth our whole lives, and things really do happen much faster now than they did in the past.
For example, it took centuries for indoor plumbing to be widely adopted, and less than a decade for smartphones. It took hundreds of thousands of years to reach the first billion people (~1800), but the eighth billion arrived in eleven years (2011-2022).
The initial part of an S-curve looks a lot like an exponential. The final part doesn't.
Finding the second and third antibiotics for non-resistant bacteria may be fast and easy; finding another three antibiotics for resistant bacteria decades later is crazy hard, as bacteria have evolved to resist everything that doesn't also kill humans.
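To make the S-curve point concrete, here is a hedged sketch with toy parameters (nothing domain-specific about antibiotics): a logistic curve tracks the pure exponential with the same growth rate almost perfectly while far below the ceiling, then saturates.

    import math

    L, x0, k = 1.0, 1e-6, 1.0  # toy carrying capacity, seed, growth rate

    def logistic(t):
        # Solution of dx/dt = k*x*(1 - x/L) with x(0) = x0.
        return L / (1 + ((L - x0) / x0) * math.exp(-k * t))

    def exponential(t):
        # Solution of dx/dt = k*x with x(0) = x0.
        return x0 * math.exp(k * t)

    for t in [0, 5, 10, 14, 20]:
        print(t, round(logistic(t), 4), round(exponential(t), 4))
    # Nearly identical up to t ~ 10; by t = 20 the exponential has
    # blown past L while the logistic flattens out just below it.

From inside the early phase no measurement tells you which curve you are on; the difference only shows up near the ceiling.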
Eh, sure, we'll hit limits eventually. We appear to be pretty far off from hard limits like thermodynamics though, and the world after we hit those limits could look very science-fiction.
For antibiotics specifically, we will probably find other ways to fight bacteria even if we never discover another chemical antibiotic. As one technology S-curves, another technology replaces it.
> As one technology S-curves, another technology replaces it
Even if for no other reason than that we abandon a diminishing-returns approach to look for other alternatives.
We have been kind of at the end of the rope for silicon for quite some time now, and we have found increasingly heroic ways to protect our investment in silicon-based semiconductors. But silicon is not the only option; it's just the one we already have supply chains set up for.
Population is a particularly good example: just decades ago it seemed like we were barreling toward an overpopulation crisis. I'm aware that some people think we're beyond the carrying capacity of the Earth long-term. But there seems to be a broad consensus that birth rates are declining everywhere without a food crisis or other immediately visible calamity.
Population appears to be on a droopy S curve. The preposterousness of those space data centers and the fact that we don't have a theory of consciousness makes it seem plausible that AI could also not continue to rocket ahead.
>The preposterousness of those space data centers and the fact that we don't have a theory of consciousness makes it seem plausible that AI could also not continue to rocket ahead.
The rate of datacenter construction in the last few years exceeds Moore's law and is almost certainly unsustainable. 'Only' 2x improvement every 2 years would seem relatively slow compared to what's happened recently.
However, I expect AI will continue to advance over the coming decades even once the bubble pops. They're clearly on to something with neural networks.
My example is lighting technology:
Wood fires were the only option for something like a few hundred thousand years.
Oil lamps for millennia.
Tallow and beeswax candles are comparatively modern, becoming widespread only after the fall of the Roman Empire.
Gas lighting was widespread for less than a century.
Incandescent lightbulbs for another century, though fluorescent tubes started replacing them just decades after they appeared.
Cold cathode fluorescents saw mainstream use for about two decades.
LEDs completely displaced almost all previous forms of lighting in less than a decade.
I recently read about a new form of lighting developed and commercialised in just a few years: https://www.science.org/doi/10.1126/sciadv.adf3737
This is an excellent example to illustrate an S-curve. There is a certain amount of energy in a photon; it cannot be emitted with less. That is a 100% efficiency barrier that cannot be surpassed no matter how smart you are.
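A hedged back-of-the-envelope for that barrier (the physical constants are real; the LED figure is an assumed ballpark): every photon costs a fixed quantum E = hc/lambda, and by the definition of the candela, light at the eye's peak sensitivity of 555 nm yields at most 683 lumens per watt.

    h = 6.62607015e-34   # Planck constant, J*s
    c = 2.99792458e8     # speed of light, m/s
    wavelength = 555e-9  # peak of photopic vision, m

    energy_per_photon = h * c / wavelength
    print(f"{energy_per_photon:.2e} J per photon")  # ~3.58e-19 J

    # 683 lm/W is the hard ceiling for monochromatic 555 nm light;
    # broadband white light caps out lower.
    max_efficacy = 683.0  # lm/W
    led_efficacy = 200.0  # assumed ballpark for a good LED today, lm/W
    print(f"~{led_efficacy / max_efficacy:.0%} of the absolute ceiling")

Once that gap closes, the only way to get more light is more watts, which is exactly where the S-curve flattens.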
I think we can stop building new streetlights at the moment we have full daylight illumination on the visible spectrum 24x7 in urban areas. We’ll probably settle for much less and be happy with that.
If we need more light, we can deploy more power generators.
The etymology and physical metaphor of "The Singularity" are a bit confused here, and I think it muddles the overall point.
> the singularity is a term borrowed from physics to describe a cataclysmic threshold in a black hole
In the article that popularized the idea of The Singularity, Vinge quotes Ulam paraphrasing von Neumann, and states, "Von Neumann even uses the term singularity". As von Neumann surely knew, "singularity" was a term widely used in mathematics well before the idea of black holes (Etymonline dates first use to 1893). Vinge says nothing about black holes.
> an object is pulled into the center [of] gravity of a black hole [until] it passes a point beyond which nothing about it, including information, can escape. [...] This disruption on the way to infinity is called a singular event – a singularity.
The point at which "nothing" can escape a black hole is the event horizon, not the singularity. What exactly happens to information and what exactly happens when crossing the event horizon are subjects of debate (see "black hole information paradox" and "AMPS/firewall paradox"); however, it's probably fair to say that the most orthodox/consensus views are that information is conserved through black-hole evaporation and that nothing dramatic happens to an observer passing through the event horizon.
> the singularity became a black hole, an impenetrable veil hiding our future from us. Ray Kurzweil, a legendary inventor and computer scientist, seized on this metaphor
While I'm not prepared to go into my personal views in this comment, it's worth noting that the idea that "exponential curves look the same from every point" is not foreign to, e.g., the Kurzweilian view of The Singularity; nevertheless, fitting dramatic, industrial-revolution-sized progress into the fixed scale of a (contemporary) human lifetime would surely be a big deal. This idea (whether you believe it will happen or not) is obscured by the spurious black hole metaphor.
Here is a neat rabbit hole: https://orionsarm.com/eg-topic/45c68b98779ad
Someone pointed out to me a few days ago that mine is the last generation who were able to lose touch and reconnect after long periods of time.
In the days of rotary & pay telephones the loss of communication was possible.
That is no longer the case.
I don't really follow. I've lost touch and reconnected with people since the invention of cell phones, the internet, and social media. Sometimes you just don't talk to someone for years even if you know how to reach them.
I guess it's easier to find people now, especially if they have an online presence, but I think the experience of losing touch is still pretty much the same.
We will carry the memory of the old ways into the 22nd century, just as our grandparents did with their 19th century inclinations. I, for one, look forward to writing out a complete essay in cursive with a pencil as the grandkids stare in awe.
> I, for one, look forward to writing out a complete essay in cursive with a pencil
My tendinitis complained as I read this. It told me not to dare try that.
A mistake in this critique is that it assumes an exponential: a constant proportional rate of growth. It is true that, in some sense, an exponential always seems to be accelerating while infinity always remains equally far away.
But this is a bit of a straw man. Mathematical models of the technological singularity [1], along with the history of human economic growth [2], are super-exponential: the rate of growth is itself increasing over time, or at least has taken multiple discrete leaps [3] at the transitions to agriculture and industry, respectively. A true singularity/infinity can of course never be achieved for physical reasons (limited stuff within the cubically-expanding lightcone, plus inherent limits to technology itself), but the growth curve can look hyperbolic and traverse many orders of magnitude before those physical limits are encountered.
[1] https://www.nber.org/system/files/working_papers/w23928/w239...
[2] https://docs.google.com/document/d/1wcEPEb2mnZ9mtGlkv8lEtScU...
[3] https://mason.gmu.edu/~rhanson/longgrow.pdf
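For concreteness, the simplest super-exponential toy model shows where the finite-time blow-up in [1] and [3] comes from. Assume (a stylized form, not a claim about the true growth law) that the growth rate itself rises with the level:

    \dot{x} = k\,x^{1+\epsilon},\ \epsilon > 0
    \quad\Longrightarrow\quad
    x(t) = x_0\,(1 - \epsilon k x_0^{\epsilon} t)^{-1/\epsilon},
    \qquad t_c = \frac{1}{\epsilon k x_0^{\epsilon}}

For epsilon = 0 this degenerates to the ordinary exponential, which never blows up; for any epsilon > 0 the solution diverges at the finite time t_c, so physical limits must bite before then.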
> A true singularity/infinity
It can’t be infinitely fast, but after the point where we all collectively cease to be able to comprehend the rate of change, it’s effectively a discontinuity from our point of view.
> the Kurzweilian version of singularity
Cynical take: Kurzweil's predictions follow a predictable pattern which suggests something about how and why they are being generated.
Namely, his predictions are whatever increasingly improbable new advances and discoveries are needed to ensure that practical immortality arrives just in time for a particular human named Ray Kurzweil to escape the icy grip of death.
And we are running out of time
He might not escape, but my grandson stands a reasonably good chance at this time.
If the singularity premise is correct, then I think it must already have happened somewhere in our cluster of the universe. Since it apparently hasn't, there are three options. First: Earth and earthlings are special, which is too egocentric a notion to be taken seriously. Second: we are being observed for entertainment by a higher consciousness. That could explain a lot, though it removes our agency and prevents us from reaching that moment on our own (but maybe the observers are curious to find out). Third: the extinction and annihilation of so-called intelligence once it obliterates all the resources in its vicinity. Of course there is option number four, but ultimately the question is: what would be the point of that?
This is basically a version of the Fermi Paradox, which, well, has a lot of different answers. A lot. One of my favorites is that humanity actually is one of the early tech civilizations, due to insufficient phosphorus in earlier phases of the universe. But also, there are a lot of ways to have a technological singularity, and many of them have no particular reason to be visible at astronomical distances.
> and many of them have no particular reason to be visible at astronomical distances.
Or to want to talk to meat.
https://youtu.be/T6JFTmQCFHg
...sure. To be clear, ETs wanting to talk to us is not necessarily a prerequisite to being detectable by us, which is the more relevant question.
But not wanting to talk to us might be a reason to make themselves hard to detect. Kind of like moving to a quieter room when you can't find anyone interesting at a party.
That's also my take. The universe is expected to last for trillions of years, and only 13 billion have passed. Life on Earth evolved surprisingly fast after the necessary conditions were reached, with some recent research on LUCA estimating less than 300 million years. I believe there are others out there, but still too far away for communication.
https://youtu.be/h6fcK_fRYaI
That's your answer? "Next time"?
This isn't right; the inflection point happens when computers/software can self-improve at a rate where humans can't keep up. It isn't just that progress is continuously exponential; it's that tech becomes a magic box that spits out advances while even the smartest humans can only pray to it, like a (hopefully benevolent) god.
I see parallels with AGI/takeoff. "It's just 2 years away," every year. KK argues that the process is continuous, but AI optimists argue the inflection will be abrupt, like a step function.
The premise of the singularity concept was always superhuman intelligence, so it’s not so much a parallel as a renaming of the same thing.
> In Vinge’s analysis, at some point not too far away, innovations in computer power would enable us to design computers more intelligent than we are, and these smarter computers could design computers yet smarter than themselves, and so on, the loop of computers-making-newer-computers accelerating very quickly towards unimaginable levels of intelligence.
This would never work in reality; you can't optimize algorithms beyond their computational complexity limits.
You can't multiply a matrix by a matrix (or a vector by a matrix) faster than O(N^2).
You can't iterate through an array faster than O(N).
Search and sort are sub- or near-linear, yes, but any realistic numerical simulation is O(N^3) or worse. Computational chemistry algorithms can be as hard as O(N^7).
And that's all in P class, not even NP.
https://en.wikipedia.org/wiki/Computational_complexity_of_ma...
The n in that article is the size of each dimension of the matrix, so N = n^2. The lowest known is about O(N^1.186) (exponent omega ~ 2.371). The most practical is O(N^1.403...) (Strassen). Naive is already O(N^1.5) which, you see, is less than O(N^2).
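For anyone curious, here is a minimal sketch of that "most practical" sub-cubic scheme, Strassen's algorithm (omega = log2 7 ~ 2.807, i.e. the O(N^1.403...) figure). It assumes square matrices whose size is a power of two; the cutoff and test size are arbitrary:

    import numpy as np

    def strassen(A, B, cutoff=64):
        """Multiply square power-of-two-sized matrices with 7 recursive
        block products instead of 8, giving O(n^log2(7)) operations."""
        n = A.shape[0]
        if n <= cutoff:          # small blocks: naive multiply is faster
            return A @ B
        h = n // 2
        A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
        B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
        M1 = strassen(A11 + A22, B11 + B22, cutoff)
        M2 = strassen(A21 + A22, B11, cutoff)
        M3 = strassen(A11, B12 - B22, cutoff)
        M4 = strassen(A22, B21 - B11, cutoff)
        M5 = strassen(A11 + A12, B22, cutoff)
        M6 = strassen(A21 - A11, B11 + B12, cutoff)
        M7 = strassen(A12 - A22, B21 + B22, cutoff)
        return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                         [M2 + M4, M1 - M2 + M3 + M6]])

    A, B = np.random.rand(128, 128), np.random.rand(128, 128)
    assert np.allclose(strassen(A, B), A @ B)

The galactic algorithms behind omega ~ 2.37 push the same block-recursion idea much further, but their constants make them useless in practice.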
Well, but still superlinear.
We don't need to optimize algorithms beyond their computational complexity limits to improve hardware.
Hardware is bound by even harder limits (transistor gate thickness, the speed of light, Amdahl's law, Landauer's limit, and so on).
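Landauer's limit in particular is easy to put a number on. A hedged sketch (the constant is real; the per-switch figure for modern hardware is an assumed order of magnitude):

    import math

    k_B = 1.380649e-23  # Boltzmann constant, J/K
    T = 300.0           # room temperature, K

    # Minimum energy to irreversibly erase one bit: k_B * T * ln(2).
    landauer = k_B * T * math.log(2)
    print(f"{landauer:.2e} J per bit erased")  # ~2.87e-21 J

    # Assumed ballpark for a modern switching event, for comparison.
    assumed_switch_energy = 1e-16  # J
    print(f"~{assumed_switch_energy / landauer:.0e}x above the floor")

So the thermodynamic ceiling is real but still several orders of magnitude away; the nearer walls are the engineering ones listed above.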
But that doesn't disprove the hypothesis that, in principle, you can have an effective self-improvement loop (my guess is that it would quickly dwindle to extremely limited gains that don't justify the expenditure).
Any such "self-improvement loop" would have a natural ceiling, though. From both algorithmic complexity and hardware limits of underlying compute substrate.
P.S. I am not arguing against, but rather agreeing with you.
The natural ceiling is the amount of compute per unit of energy. Once you can no longer improve energy efficiency, you can still add more energy to operate more compute capacity.
Which hints that truly superintelligent AIs will consume vast amounts of energy to operate and vast amounts of matter to build.
At some point it will hit the speed of light as the limit on how quickly it can propagate its internal state to itself: as the brain grows larger, the mind slows down, or breaks apart into smaller units that can work faster before rejoining the bigger entity and propagating their new state.
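A toy illustration of that limit, with assumed sizes and an assumed 1 GHz internal clock; only the scaling matters:

    c = 2.99792458e8  # speed of light, m/s

    for name, diameter_m in [("human brain", 0.15),
                             ("datacenter", 1_000.0),
                             ("planet-sized mind", 1.27e7)]:
        latency_s = diameter_m / c          # one-way light travel time
        cycles = latency_s * 1e9            # 1 GHz ticks spent waiting
        print(f"{name}: {latency_s:.2e} s, ~{cycles:,.1f} cycles")

A planet-sized mind waits tens of millions of cycles for a signal to cross itself, so it has every incentive to fragment into faster local units, exactly as described.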
Must feel really strange.
You're measuring speed, not intelligence. It's a different metric.
It is exactly the same metric. Intelligence is not magic, be it organic or LLM-based. You still need to go through the training-set data to make any useful extrapolations about unknown inputs.
vaguely related
https://bigthink.com/guest-thinkers/ray-kurzweil-the-six-epo...
A good reminder that every technological exponential is a sigmoid.
Until you change the dominant technology. At some point we ditched germanium transistors for silicon ones, not least because silicon tolerates higher temperatures and grows a stable native oxide.
It is something that will or will not happen on an individual level.
You are fools to think you personally are a part of or will be present at the zenith of human ascendancy.
One, all, and the world will go on as though another day. Those who become or go beyond their “full self” will merely have a new level. Like a base conversion.
Besides, there are notes of singularities flitting in and out of your very minds. Get to the bottom of those and you will find that whichever part is yours will come by your acquiring it for yourself.
The singularium will be your own place in the ascendancy of Man, through technology or personal development. The self is the ultimate technology.
I have noticed a pattern where one thing that is not understood gets tied to another thing that is not understood (one perceived mystery to another, one unsolved problem to another, one misunderstanding to another), and then this is declared to be an answer.
Quantum mechanics and consciousness.
Pyramids and aliens.
Looking forward, it is a great opportunity for random mashup "explanations". The urge will be great for some people.
Too true.
Quantum mechanics as understood is flawed, consciousness is universal potential subjectively bound to particulate, animated by living biotechnology, and squares of this day still refuse to wink at “magic.”
Pyramids are human engineering. "The Greys" are our Earth mates. America's nuclear suicidal tendencies have revoked your right to deny. I speak only for the ascendancy of Man.
Have fun flat landing stoic, I know you’re really a bleeding heart.