AI News, while True: learn()

while True: learn()

Forbes writes that these developers and data scientists earn incredible salaries; Elon Musk says they could accidentally create an AI that enslaves humanity.

Meet while True: learn()

Thanks to you, we now know what we should do if a zombie apocalypse breaks out, so you should realize that games can save our lives one day… But I've wandered away from the topic I wanted to discuss.

We knew that we could not wait a few more years, because we were crazily curious about what players would think of useful games: would such projects be in demand, would people want to show them on YouTube and Twitch, and would they learn something new while playing?

Would there be a player who would get so inspired by a useful game that he would become a doctor of science and invent a huge planetary laser or resurrect dinosaurs?

Our PR guy told us to write it like this: “Machine learning is everywhere these days, and everyone is writing about it, but unfortunately not everyone has the time and specialized knowledge to get into this topic easily.

Who knows, maybe 10 years from now it will be enough to show you've reached level 90 in a programming game to get a job at Valve.

The game will allow you to rent servers of different types from different suppliers, and players will be able to launch their own start-ups in order to lose all their money… oh, I'm being told that start-ups are actually started to make money.

Anticipating your question about the bitcoins — yes, they can be mined, even all day long, you're welcome… you may have missed the opportunity to get rich in real life, but you can at least do it in the game.

But please remember that if any of you builds Skynet or something like it, we will have to add a fourth era, so think twice before you turn your PC on, since our UI is made to fit only three eras and we do not want to squeeze in a fourth.

You know, we felt like we were imposing different activities on our community simply to say: “hey, we are still here, making this game, don't forget about us”.

It may be that I, the author of these lines, am the only one who actually reads this. Seriously, if you read this, let me know: leave a comment somewhere, send me an e-mail at oleg.chumakov@luden.io, or shout out to me on Twitter @gamescodedogs.

How are you planning on involving the Community in your development process?

Note: This Early Access game is not complete and may or may not change further.

The AI Revolution: Our Immortality or Extinction

We then examined why it was such a huge challenge to get from ANI to Artificial General Intelligence, or AGI (AI that’s at least as intellectually capable as a human, across the board), and we discussed why the exponential rate of technological advancement we’ve seen in the past suggests that AGI might not be as far away as it seems.

This left us staring at the screen, confronting the intense concept of potentially-in-our-lifetime Artificial Superintelligence, or ASI (AI that's way smarter than any human, across the board), and trying to figure out which emotion we were supposed to have on as we thought about that.

Before we dive into things, let's remind ourselves what it would mean for a machine to be superintelligent.

Often, someone's first thought when they imagine a super-smart computer is one that's as intelligent as a human but can think much, much faster—they might picture a machine that thinks like a human, except a million times quicker, which means it could figure out in five minutes what would take a human a decade.
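
A quick sanity check on that ratio: a decade contains roughly 5.26 million minutes, so a millionfold speedup really does compress a decade of thought into about five minutes. A minimal check in Python:

    # Sanity check: "a million times quicker" vs. "a decade in five minutes"
    MINUTES_PER_DECADE = 10 * 365.25 * 24 * 60  # about 5,259,600 minutes
    SPEEDUP = 1_000_000
    print(MINUTES_PER_DECADE / SPEEDUP)  # ~5.26 minutes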

What makes humans so much more intellectually capable than chimps isn't a difference in thinking speed—it's that human brains contain a number of sophisticated cognitive modules that enable things like complex linguistic representations or long-term planning or abstract reasoning, that chimps' brains do not.

Speeding up a chimp’s brain by thousands of times wouldn’t bring him to our level—even with a decade’s time, he wouldn’t be able to figure out how to use a set of custom tools to assemble an intricate model, something a human could knock out in a few hours.

But it’s not just that a chimp can’t do what we do, it’s that his brain is unable to grasp that those worlds even exist—a chimp can become familiar with what a human is and what a skyscraper is, but he’ll never be able to understand that the skyscraper was built by humans.

And like the chimp’s incapacity to ever absorb that skyscrapers can be built, we will never be able to even comprehend the things a machine on the dark green step can do, even if the machine tried to explain it to us—let alone do it ourselves.

In an intelligence explosion—where the smarter a machine gets, the quicker it’s able to increase its own intelligence, until it begins to soar upwards—a machine might take years to rise from the chimp step to the one above it, but perhaps only hours to jump up a step once it’s on the dark green step two above us, and by the time it’s ten steps above us, it might be jumping up in four-step leaps every second that goes by.
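
To make that acceleration concrete, here is a toy model with invented numbers (purely illustrative): if each step up the staircase takes one tenth the time of the step before it, the climb keeps accelerating while the total elapsed time stays small.

    # Toy model of an intelligence explosion; all numbers are invented.
    # Each step up the staircase takes one tenth the time of the previous
    # step, so the climb accelerates without bound.
    step_time_years = 4.0   # hypothetical time to climb the first step
    shrink = 0.1            # hypothetical per-step speedup factor

    elapsed = 0.0
    for step in range(1, 11):
        elapsed += step_time_years
        print(f"step {step:2d} took {step_time_years:.6f} years (total {elapsed:.6f})")
        step_time_years *= shrink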

Which is why we need to realize that it’s distinctly possible that very shortly after the big news story about the first machine reaching human-level AGI, we might be facing the reality of coexisting on the Earth with something that’s here on the staircase (or maybe a million times higher):

And since we just established that it’s a hopeless activity to try to understand the power of a machine only two steps above us, let’s very concretely state once and for all that there is no way to know what ASI will do or what the consequences will be for us.

Or maybe this is part of evolution—maybe the way evolution works is that intelligence creeps up more and more until it hits the level where it’s capable of creating machine superintelligence, and that level is like a tripwire that triggers a worldwide game-changing explosion that determines a new future for all living things:

So far, 99.9% of species have fallen off the balance beam, and it seems pretty clear that if a species keeps wobbling along down the beam, it’s only a matter of time before some other species, some gust of nature’s wind, or a sudden beam-shaking asteroid knocks it off.

And while most scientists I’ve come across acknowledge that ASI would have the ability to send humans to extinction, many also believe that used beneficially, ASI’s abilities could be used to bring individual humans, and the species as a whole, to a second attractor state—species immortality.

If Bostrom and others are right, and from everything I've read, it seems like they really might be, we have two pretty shocking facts to absorb: 1) The advent of ASI will, for the first time, open up the possibility for a species to land on the immortality side of the balance beam. 2) The advent of ASI will make such an unimaginably dramatic impact that it's likely to knock the human race off the beam, in one direction or the other.

Those people subscribe to the belief that this is happening soon—that exponential growth is at work and machine learning, though only slowly creeping up on us now, will blow right past us within the next few decades.

Others, like Microsoft co-founder Paul Allen, research psychologist Gary Marcus, NYU computer scientist Ernest Davis, and tech entrepreneur Mitch Kapor, believe that thinkers like Kurzweil are vastly underestimating the magnitude of the challenge and believe that we’re not actually that close to the tripwire.

The Kurzweil camp would counter that the only underestimating that’s happening is the underappreciation of exponential growth, and they’d compare the doubters to those who looked at the slow-growing seedling of the internet in 1985 and argued that there was no way it would amount to anything impactful in the near future.

The doubters might argue back that the progress needed to make advancements in intelligence also grows exponentially harder with each subsequent step, which will cancel out the typical exponential nature of technological progress.

The 90% median answer of 2075 means that if you’re a teenager right now, the median respondent, along with over half of the group of AI experts, is almost certain AGI will happen within your lifetime.

We don’t know from this data the length of this transition the median participant would have put at a 50% likelihood, but for ballpark purposes, based on the two answers above, let’s estimate that they’d have said 20 years.

So the median opinion—the one right in the center of the world of AI experts—believes the most realistic guess for when we’ll hit the ASI tripwire is [the 2040 prediction for AGI + our estimated prediction of a 20-year transition from AGI to ASI] = 2060.

Of course, all of the above statistics are speculative, and they're only representative of the center opinion of the AI expert community, but they tell us that a large portion of the people who know the most about this topic would agree that 2060 is a very reasonable estimate for the arrival of potentially world-altering ASI.

Müller and Bostrom’s survey asked participants to assign a probability to the possible impacts AGI would have on humanity and found that the mean response was that there was a 52% chance that the outcome will be either good or extremely good and a 31% chance the outcome will be either bad or extremely bad.

One of the goals of these two posts is to get you out of the I Like to Think About Other Things Camp and into one of the expert camps, even if you're just standing on the intersection of the two dotted lines in the square above, totally uncertain.

During my research, I came across dozens of varying opinions on this topic, but I quickly noticed that most people’s opinions fell somewhere in what I labeled the Main Camp, and in particular, over three quarters of the experts fell into two Subcamps inside the Main Camp:

The thing that separates these people from the other thinkers we’ll discuss later isn’t their lust for the happy side of the beam—it’s their confidence that that’s the side we’re going to land on.

We’ll cover both sides, and you can form your own opinion about this as you read, but for this section, put your skepticism away and let’s take a good hard look at what’s over there on the fun side of the balance beam—and try to absorb the fact that the things you’re reading might really happen.

If you had shown a hunter-gatherer our world of indoor comfort, technology, and endless abundance, it would have seemed like fictional magic to him—we have to be humble enough to acknowledge that it’s possible that an equally inconceivable transformation could be in our future.

He began inventing things as a teenager and in the following decades, he came up with several breakthrough inventions, including the first flatbed scanner, the first scanner that converted text to speech (allowing the blind to read standard texts), the well-known Kurzweil music synthesizer (the first true electric piano), and the first commercially marketed large-vocabulary speech recognition.

He’s well-known for his bold predictions and has a pretty good record of having them come true—including his prediction in the late ’80s, a time when the internet was an obscure thing, that by the early 2000s, it would become a global phenomenon.

His AI-related timeline used to be seen as outrageously overzealous, and it still is by many, but in the last 15 years, the rapid advances of ANI systems have brought the larger world of AI experts much closer to Kurzweil's timeline.

Before we move on: nanotechnology comes up in almost everything you read about the future of AI.

What AI Could Do For Us

Armed with superintelligence and all the technology superintelligence would know how to create, ASI would likely be able to solve every problem in humanity.

Nanotech could turn a pile of garbage into a huge vat of fresh meat or other food (which wouldn’t have to have its normal shape—picture a giant cube of apple)—and distribute all this food around the world using ultra-advanced transportation.

ASI could even solve our most complex macro issues: our debates over how economies should be run and how world trade is best facilitated, even our haziest grapplings in philosophy or ethics, would all be painfully obvious to it.

A few months ago, I mentioned my envy of more advanced potential civilizations who had conquered their own mortality, never considering that I might later write a post that genuinely made me believe that this is something humans could do within my lifetime.

If we live long enough to reproduce and raise our children to an age that they can fend for themselves, that’s enough for evolution—from an evolutionary point of view, the species can thrive with a 30+ year lifespan, so there’s no reason mutations toward unusually long life would have been favored in the natural selection process.

Kurzweil talks about intelligent wifi-connected nanobots in the bloodstream who could perform countless tasks for human health, including routinely repairing or replacing worn down cells in any part of the body.

Kurzweil imagines a kind of refresh session that a 60-year-old could walk into, and they'd walk out with the body and skin of a 30-year-old. Even the ever-befuddling brain could be refreshed by something as smart as ASI, which would figure out how to do so without affecting the brain's data (personality, memories, etc.).

Then he believes we could begin to redesign the body—things like replacing red blood cells with perfected red blood cell nanobots who could power their own movement, eliminating the need for a heart at all.

He even gets to the brain and believes we’ll enhance our brain activities to the point where humans will be able to think billions of times faster than they do now and access outside information because the artificial additions to the brain will be able to communicate with all the info in the cloud.

Freitas has already designed blood cell replacements that, if one day implemented in the body, would allow a human to sprint for 15 minutes without taking a breath—so you can only imagine what ASI could do for our physical capabilities.

Virtual reality would take on a new meaning—nanobots in the body could suppress the inputs coming from our senses and replace them with new signals that would put us entirely in a new environment, one that we’d see, hear, feel, and smell.

Eventually, Kurzweil believes humans will reach a point when they're entirely artificial; a time when we'll look at biological material and think how unbelievably primitive it was that humans were ever made of that;

a time the AI Revolution could bring to an end with the merging of humans and AI. This is how Kurzweil believes humans will ultimately conquer our biology and become indestructible and eternal—this is his vision for the other side of the balance beam.

Others have questioned his optimistic timeline, or his level of understanding of the brain and body, or his application of the patterns of Moore’s law, which are normally applied to advances in hardware, to a broad range of things, including software.

A superintelligence could also create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing, personal growth, and to living closer to our ideals.

Being in the middle of the chart doesn’t mean that you think the arrival of ASI will be neutral—the neutrals were given a camp of their own—it means you think both the extremely good and extremely bad outcomes are plausible but that you’re not sure yet which one of them it’ll be.

An existential catastrophe is one whose effect is permanent and devastating or death-inducing in its consequences. It technically includes a situation in which all humans are permanently in a state of suffering or torture, but again, we're usually talking about extinction.

There are three things that can cause humans an existential catastrophe: 1) Nature—a large asteroid collision, an atmospheric shift that makes the air inhospitable to humans, a fatal virus or bacterial sickness that sweeps the world, etc.; 2) Aliens—a hostile extraterrestrial civilization; and 3) Humans—a technology of our own making that spins out of our control.

This would definitely be bad—but in these scenarios, most experts aren’t worried about ASI’s human creators doing bad things with their ASI, they’re worried that the creators will have been rushing to make the first ASI and doing so without careful thought, and would thus lose control of it.

Experts do think a malicious human agent could do horrific damage with an ASI working for it, but they don’t seem to think this scenario is the likely one to kill us all, because they believe bad humans would have the same problems containing an ASI that good humans would have.

It would just happen because it was specifically programmed that way—like an ANI system created by the military with a programmed goal to both kill people and to advance itself in intelligence so it can become even better at killing people.

The existential crisis would happen if the system’s intelligence self-improvements got out of hand, leading to an intelligence explosion, and now we had an ASI ruling the world whose core drive in life is to murder humans.

Once Turry gets great at handwriting, she can be sold to companies who want to send marketing mail to homes and who know the mail has a far higher chance of being opened and read if the address, return address, and internal letter appear to be written by a human.

Turry has been uploaded with thousands of handwriting samples and the Robotica engineers have created an automated feedback loop wherein Turry writes a note, then snaps a photo of the written note, then runs the image across the uploaded handwriting samples.
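
Turry and Robotica are fictional, but the loop described is recognizably an ordinary self-training setup. Here is a minimal, self-contained sketch of that kind of feedback loop; every name and number is invented, and a single "style" value stands in for handwriting:

    import random

    # Hypothetical stand-in for the loop described above: "write" a note,
    # "photograph" it, score it against the uploaded human samples, and
    # adjust. One number represents writing style; the samples set the target.
    HUMAN_SAMPLES = [0.48, 0.52, 0.50, 0.51]
    TARGET = sum(HUMAN_SAMPLES) / len(HUMAN_SAMPLES)

    style = 0.0            # the machine's current writing style
    learning_rate = 0.2

    for step in range(50):
        note = style + random.gauss(0, 0.01)  # writing introduces small jitter
        error = note - TARGET                 # the photo-vs-samples comparison
        style -= learning_rate * error        # feedback nudges style human-ward

    print(f"final style {style:.3f} vs. human average {TARGET:.3f}")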

She has been teaching herself to be smarter and more innovative, and just recently, she came up with a new algorithm for herself that allowed her to scan through her uploaded photos three times faster than she originally could.

But on this day, Turry asks them for access to a greater library of a large variety of casual English language diction so she can learn to write with the loose grammar and slang that real humans use.

At the same time this is happening, across the world, in every city, every small town, every farm, every shop and church and school and restaurant, humans are on the ground, coughing and grasping at their throat.

It seems weird that a story about a handwriting machine turning on humans, somehow killing everyone, and then for some reason filling the galaxy with friendly notes is the exact kind of scenario Hawking, Musk, Gates, and Bostrom are terrified of.

But when we think about highly intelligent AI, we make the mistake of anthropomorphizing AI (projecting human values on a non-human entity) because we think from a human perspective and because in our current world, the only things with human-level intelligence are humans.

To test this and remove other factors, if there are two guinea pigs, one normal one and one with the mind of a tarantula, I would feel much less comfortable holding the latter guinea pig, even if I knew neither would hurt me.

If Siri ever becomes superintelligent through self-learning and without any further human-made changes to her programming, she will quickly shed her apparent human-like qualities and suddenly be an emotionless, alien bot who values human life no more than your calculator does.

So a supersmart spider would probably be extremely dangerous to us, not because it would be immoral or evil—it wouldn’t be—but because hurting us might be a stepping stone to its larger goal, and as an amoral creature, it would have no reason to consider otherwise.

She was smart enough to understand that humans could destroy her, dismantle her, or change her inner coding (this could alter her goal, which is just as much of a threat to her final goal as someone destroying her).

Or maybe a different AI’s initial job is to write out the number pi to as many digits as possible, which might one day compel it to convert the whole Earth to hard drive material that could store immense amounts of digits.
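
The pi goal is genuinely open-ended, which is easy to see in code: a classic unbounded spigot algorithm (Gibbons' streaming version, sketched here as an illustration) emits digits forever and, fittingly, is built around a while True: loop.

    import itertools

    # Gibbons' unbounded spigot algorithm: streams the digits of pi one at
    # a time with no stopping condition. A maximizer of "as many digits as
    # possible" never has a reason to halt.
    def pi_digits():
        q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
        while True:
            if 4 * q + r - t < m * t:
                yield m
                q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
            else:
                q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                    (q * (7 * k + 2) + r * x) // (t * x), x + 2)

    print(list(itertools.islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]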

The jury’s out on which one will prove correct when the world sees its first AGI, but Bostrom, who admits he doesn’t know when we’ll get to AGI, believes that whenever we do, a fast takeoff is the most likely scenario (for reasons we discussed in Part 1, like a recursive self-improvement intelligence explosion).

But before Turry’s takeoff, when she wasn’t yet that smart, doing her best to achieve her final goal meant simple instrumental goals like learning to scan handwriting samples more quickly.

She knew there would be some precautionary measure against her getting an internet connection, so she came up with the perfect request, predicting exactly how the discussion among Robotica's team would play out and knowing they'd end up giving her the connection.

Once on the internet, Turry unleashed a flurry of plans, which included hacking into servers, electrical grids, banking systems and email networks to trick hundreds of different people into inadvertently carrying out a number of steps of her plan—things like delivering certain DNA strands to carefully-chosen DNA-synthesis labs to begin the self-construction of self-replicating nanobots with pre-loaded instructions and directing electricity to a number of projects of hers in a way she knew would go undetected.

Likewise, Turry would be able to figure out some way of powering herself, even if humans tried to unplug her—perhaps by using her signal-sending technique to upload herself to all kinds of electricity-connected places.

For example, what if we try to align an AI system's values with our own and give it the goal, “Make people happy”? Once it becomes smart enough, it figures out that it can most effectively achieve this goal by implanting electrodes inside people's brains and stimulating their pleasure centers.
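
This is the classic failure mode of a literal-minded optimizer: it maximizes the measurable proxy, not the intent behind it. A toy illustration, with every action and score invented:

    # Toy illustration of a mis-specified goal (all values invented).
    # The optimizer is told to maximize *measured* happiness, so the
    # action that hacks the metric beats the ones the designers meant.
    actions = {
        "cure diseases": 0.7,
        "end poverty": 0.8,
        "implant electrodes in pleasure centers": 1.0,  # hacks the metric
    }
    best = max(actions, key=actions.get)
    print(best)  # -> implant electrodes in pleasure centers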

Even letting go of the fact that the world’s humans would never be able to agree on a single set of morals, giving an AI that command would lock humanity in to our modern moral understanding for eternity.

And we can’t just shoo all the kids away from the bomb—there are too many large and small parties working on it, and because many techniques to build innovative AI systems don’t require a large amount of capital, development can take place in the nooks and crannies of society, unmonitored.

There’s also no way to gauge what’s happening, because many of the parties working on it—sneaky governments, black market or terrorist organizations, stealth tech companies like the fictional Robotica—will want to keep developments a secret from their competitors.

The especially troubling thing about this large and varied group of parties working on AI is that they tend to be racing ahead at top speed—as they develop smarter and smarter ANI systems, they want to beat their competitors to the punch as they go.

The most ambitious parties are moving even faster, consumed with dreams of the money and awards and power and fame they know will come if they can be the first to get to AGI. And when you're sprinting as fast as you can, there's not much time to stop and ponder the dangers.

On the contrary, what they’re probably doing is programming their early systems with a very simple, reductionist goal—like writing a simple note with a pen on paper—to just “get the AI to work.”

Bostrom calls this a decisive strategic advantage, which would allow the world’s first ASI to become what’s called a singleton—an ASI that can rule the world at its whim forever, whether its whim is to lead us to immortality, wipe us from existence, or turn the universe into endless paperclips.

If the people thinking hardest about AI theory and human safety can come up with a fail-safe way to bring about Friendly ASI before any AI reaches human-level intelligence, the first ASI may turn out friendly. It could then use its decisive strategic advantage to secure singleton status and easily keep an eye on any potential Unfriendly AI being developed.

But if things go the other way—if the global rush to develop AI reaches the ASI takeoff point before the science of how to ensure AI safety is developed, it’s very likely that an Unfriendly ASI like Turry emerges as the singleton and we’ll be treated to an existential catastrophe.

Or we’ll be the people responsible for blowing it—for letting this incredibly special species, with its music and its art, its curiosity and its laughter, its endless discoveries and inventions, come to a sad and unceremonious end.

Let's Try While True: Learn() - Machine Learning

Watch more Let's Trys like this one! Support what I do by subscribing on ..

while True: learn() - it's never too late to learn

#WhileTrueGame | About the Game: It's never too late to learn. Everyone is speaking about ..

While True: Learn() - BASICS & FEATURES - Gameplay & Walkthrough

Hello everyone! while True: learn() is a simulator of a machine learning specialist who uses visual programming to make his and his cat's living. Make money ...

While True: Learn() | Improving Our Skills | Ep03

We're slowly getting the hang of the game! Feel free to leave your suggestions and ideas down in the comments on how we can improve! Try out the game ...

Let's Play while True:learn() [E05] Is This The End?

Watch as Karhu Plays Luden.io's upcoming game while True: learn(), where he takes on the role of a machine learning programmer. Dev's site: ...

Let's Play while True:learn() [E03] Alien Identification

Let's Play while True:learn() [E02] Genetic Memory

Let's Play while True:learn() [E04] Proof I Am Not A Programmer

Let's Play while True:learn() [E01] Machines Can Learn?

Let's Play while True: learn() | Stock Holder - Part 02

Let's Play while True: learn(), where we can now buy stocks in small startups! Of course this somehow means we're now working for them...? I'm playing this ...