That's incredible, though nothing is free or limitless; we unfortunately live in a closed system *sniff*. Every resource will eventually run out. But as Einstein also said, energy cannot be destroyed, and neither can it be created. It can only change between forms, and I imagine that's what the interaction between matter and antimatter does. Apparently, if you explode a nuke inside a sealed box and then weigh that box, its weight will be exactly the same as it was before the explosion. Doesn't that mean that the universe... can theoretically run forever?

I know that someday everything will decay and the bubble that is the universe will collapse in on itself. I'm pretty sure I read a theory somewhere which said that the universe will eventually collapse right back into the super-hot blob of mass it was before the Big Bang. And then the Big Bang will inevitably happen again, giving birth to another universe. If this is the case, and it can happen an infinite number of times, then at some point a Big Bang will lead to the exact same chain of events that allowed for all our births to take place. Sure, it might take a quintillion tries... triple that. But infinite means that eventually, it will happen.

In other words, it's possible that we've all died... and were then born again. Multiple times. That's just... mind-blowing.
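To put a number on the nuke-in-a-box idea: by E = mc², the energy a bomb releases corresponds to an amount of rest mass, and the box only keeps its weight if that energy (radiation and heat included) stays sealed inside. Here's a rough back-of-the-envelope sketch, using the standard conversion of one megaton of TNT to joules (the specific yield is just a chosen example, not from the thread):

```python
# Back-of-the-envelope sketch: mass-energy equivalence, E = m * c^2.
# The 1-megaton yield is an illustrative assumption, not a claim from the thread.
c = 299_792_458.0      # speed of light in m/s (exact, by definition)
E_megaton = 4.184e15   # joules released by 1 megaton of TNT (standard conversion)

# Rest mass equivalent of that energy: m = E / c^2
m_converted = E_megaton / c**2
print(f"mass-energy of a 1 Mt blast: {m_converted * 1000:.1f} grams")
```

So a one-megaton explosion corresponds to only about 47 grams of mass. If the box is perfectly sealed, it still weighs the same, because that energy is still inside; the moment any light or heat escapes, the box gets lighter by exactly that amount.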
But is it useful? It might be if I can find a way to quantum-store the results of horse races in such a way that they are emailed to my phone in a quintillion years' time after the next Big Bang... and in such a way that I believe me.
I can guarantee you that this won't happen. Either:
a) It's scientifically impossible.
b) It's possible, but the above theory doesn't apply, and infinite rebirth doesn't happen.
c) Both things are possible, but in some timeline, after a big mess happened, some smart people rigged things in a grand master plan so it doesn't happen again.
d) It's possible, but humanity is intellectually incapable of developing such technology before it goes extinct.
Going back to the possibility... I'm not a quantum physicist, but isn't quantum physics about physical laws down at the atomic level? Atoms have states, including superposition. If atoms decay over time and their state is lost, how are you gonna quantum-lock it?
No, because those rules don't apply inside black holes, and eventually everything will fall into a black hole. Then the black holes themselves will evaporate via Hawking radiation until the universe is basically empty. I'm pretty sure I'm remembering that correctly...
Which is why I hope there is a finite multiverse and ways to travel between universes. The end of life scares me, and it makes part of me feel that grander things, such as the continuity of civilization, are ultimately pointless. I love humanity, and I would love for us to exist forever! The reason I hope the multiverse is finite is that an infinite multiverse scares me more than anything. Imagine: anything possible, your worst nightmares and your best dreams, all real.
There's a short story by Ted Chiang where a woman breaks mathematics, and when she looks at her proof and realizes that her entire life and career are based on nothing, she kills herself. That's it. It's only a few pages long, but the idea is profound. We can't live under a system that isn't stable. Or I think that was its message. I suppose the same thing would apply to a random universe. (Now I'm second-guessing myself and thinking the story is by Ken Liu. I was reading both authors at the time, and though I know exactly where I was when I read it, I can't remember who wrote the story.)
The implication of recent research is that dark matter and black holes may be one and the same: that primordial black holes account for all the dark matter in the Universe. https://news.yale.edu/2021/12/16/black-holes-and-dark-matter-are-they-one-and-same#:~:text=Primordial%20black%20holes%20created%20in,dark%20matter%20in%20the%20universe.
The expansion vs collapse theory changes with the publication of each quarterly journal. Last I heard, the latest "consensus" is a universe that will expand forever. Ditto with the dark matter ratio. The problem with cosmological distances is that the margin for error is like 40%... more than enough room to accommodate any theory. Wait five years and they'll have a completely new interpretation.
We know how the world will end, but not the Universe. There's recent research that supports alternative endings. It all has to do with the struggle between the momentum of its expansion and the pull of gravity. If gravity is greater, then, yes, the expansion of the Universe reverses, leading to The Big Crunch. But scientists are finding out that dark energy may make the difference. If it causes the expansion to continue, the Universe will spread out thin, ending in Heat Death or The Big Freeze. If dark energy actually causes the expansion to accelerate, we're in for The Big Rip. https://www.nature.com/articles/d41586-020-02338-w
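The three endings above can be boiled down to a toy decision rule. This is my own rough sketch of the textbook FLRW picture, not something from the Nature article: `omega_total` stands for total density relative to the critical density, and `w` is the dark-energy equation-of-state parameter (pressure divided by energy density), with `w = -1` being a plain cosmological constant and `w < -1` the "phantom" case:

```python
# Toy sketch (my own, heavily simplified) of how the fate of the universe
# depends on total density (Omega) and the dark-energy parameter w.
def cosmic_fate(omega_total: float, w: float) -> str:
    if omega_total > 1 and w >= -1/3:
        return "Big Crunch"   # gravity wins: expansion reverses
    if w < -1:
        return "Big Rip"      # phantom dark energy: acceleration tears everything apart
    return "Heat Death"       # expansion continues forever, everything spreads thin

print(cosmic_fate(2.0, 0.0))    # over-dense, matter-dominated -> "Big Crunch"
print(cosmic_fate(1.0, -1.0))   # cosmological constant -> "Heat Death"
print(cosmic_fate(1.0, -1.2))   # phantom energy -> "Big Rip"
```

Real cosmology is an integration of the Friedmann equations, not a three-way if-statement, but the branch points really are these two quantities: whether gravity can overcome the expansion, and whether dark energy merely sustains the expansion or accelerates it without bound.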
We are a complicated collection of interconnected clocks. Breaths, heart beat, electrical impulses, nervous system rhythms, feedback chemistry. It helps us experience time. Can artificial intelligence experience time?
Is this still the science thread, or is it now the philosophy debate thread? A cheap internet router can experience time.
Not like we do. Maybe I meant perceive it: understand it as a flowing thing, store memories, and predict the future. Partly I ask because I know AI doesn't grasp cause and effect; it doesn't understand causation. It can relate events A and B, but can't see that A makes B happen. If you don't understand causation, can you understand time? I'm reading a book right now by physicist Sean Carroll, entitled From Eternity to Here. It's all about time, as studied by physicists. That is the kind of time I mean.
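The "relate A and B but can't see that A makes B happen" point can be made concrete with a classic confounder example. This is my own toy demo, not from Carroll's book: a hidden variable Z drives both A and B, so any pattern-matcher finds a strong A-B correlation, yet forcing A to change (an intervention) leaves B untouched:

```python
# Toy demo (my own example): correlation without causation via a hidden common cause.
import random

random.seed(0)
z = [random.gauss(0, 1) for _ in range(10_000)]    # hidden common cause
a = [zi + random.gauss(0, 0.1) for zi in z]        # A is driven by Z
b = [zi + random.gauss(0, 0.1) for zi in z]        # B is driven by Z too

def corr(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    vx = sum((xi - mx) ** 2 for xi in x) / n
    vy = sum((yi - my) ** 2 for yi in y) / n
    return cov / (vx * vy) ** 0.5

print(corr(a, b))         # near 1: A and B look tightly linked

# "Intervene": set A by fiat, independent of Z. B doesn't budge.
a_forced = [random.gauss(0, 1) for _ in range(10_000)]
print(corr(a_forced, b))  # near 0: A never caused B at all
```

A system trained only on observations of A and B would confidently predict one from the other and still be completely wrong about what happens when you reach in and change A. That's the gap between statistics and causation.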
I don't think people perceive or understand time at all. We just count it, and when we feel a gnawing sensation that we might have forgotten a deadline, we send out some NTP packets - I mean: look at a clock. My router's log is better than my memory. It can't predict the future, but neither can I (especially in the science thread). And it performs its function and knows all the firewall rules it has to follow.
Another reason I ask is because I want to write sci-fi. How human-like can you make an AI character? You're right there - we don't understand time. But there are a lot of theories about it. John Archibald Wheeler, an American physicist who coined the term black hole, was once asked how he would define "time". He thought about it a bit, then answered: "Time is Nature's way of keeping everything from happening at once."
Tomorrow's Eve - the AI's whole story-ordeal, her whole problem, is that she's just like a human woman.
Blade Runner - more human than human.
Ray Bradbury's 'The City' - does it have to be human-like, or can its being alien-like be interesting?
Ghost in the Shell 2 - you made the AI human-like? How could you? The poor AI!

(The respective attitudes of these four AIs to time is an important theme in their stories, imo - very different treatments of potential issues.*)

And that's what I think the AI in sci-fi really is: writers have been writing it since long before any real AI existed, and the technological advancements have had little impact on how it's written. Sci-fi is about what-ifs, and AI is a way to explore 'what if people worked differently?'

* Briefly (iirc), spoilers: Hadaly in Tomorrow's Eve has to be switched off because she'll always be beautiful; Blade Runner switches it round so the replicants have less time than us; The City uses its infinite patience to exact revenge on its makers' killers; and the pleasure-dolls in GitS2 are murderous as an expression of the mortality of the girl they are based on (I am angry because I am not an immortal doll, so if you make a doll of me, it will be an angry doll).
The future of AI is building computer systems that innately grasp three basic concepts: time, space, and causality. At the moment, AI systems only handle time as an implicit construct; the best example is programming a watch to output the time read from a clock. An AI has no way of understanding the concept of time as you or I would. To make the next breakthrough toward human-level artificial general intelligence (AGI), these three core concepts must be understood by the machine. Then we are off to the races... When will singularity happen? 995 experts' opinions on AGI (aimultiple.com). The above link is a great dive into the subject and worth a look. A survey of AI experts on AGI timing suggested it's likely by 2060 and highly likely by 2075. That's not far away at all...
The article at that link was extremely interesting and helpful. Thank you very much. So it looks like we can expect the singularity before the end of the century. Let's hope the AI builders follow Asimov's Laws of Robotics and program benevolence into them.
The argument against AGI happening is also interesting. Intelligence is multi-dimensional, and how the brain 'cuts corners' to make best-guess solutions is remarkable. Applying this to AGI seems somewhat far off indeed. The three basic core functions of time, space, and causality I can envisage happening. Getting the AI to understand abstract concepts is what will make them human-like (the AI God question in my thread). Asimov's three laws would work in the early stages of AGI, but would become more of a moral compass the machine guides itself by in later development. That's scary...
Yes, it is. Especially if as time goes on we rely on AI more and more to make decisions for us. Here's something I read that reads like a great novel plot: Might AI replace our moral decision-making capacities? In many cases, yes, it may. We already modify human behavior through law, government, and culture. Specialized citizens make and enforce laws so as to promote or suppress certain behaviors. Therefore we typically have to think less about our moral choices than would, say, people living in anarchy. AI would merely extend this power to even more of life and with even more control. And as computers do more, humans will do less – humanity will turn over vast areas of decision-making to automated systems, and we will lose skill at those tasks. https://www.scu.edu/ethics/focus-areas/technology-ethics/resources/artificial-intelligence-decision-making-and-moral-deskilling/#:~:text=In%20many%20cases%2C%20yes%2C%20it,law%2C%20government%2C%20and%20culture.