Backreaction: The Real Problems with Artificial Intelligence
A Promobot was struck by a self-driving Tesla in Las Vegas ahead of CES. The accident occurred on Paradise Rd as engineers transported the bots; one of the Promobots stepped out of line and into the roadway, where it was hit. The Tesla Model S was operating autonomously, though a passenger was on board. https://www.dailymail.co.uk/sciencetech/article-6566655/Oops-Autonomous-robot-struck-killed-self-driving-Tesla-Las-Vegas-ahead-CES.html
It was amazing to watch, but the computer didn't understand the answers, sometimes got them wrong, and didn't know it had won. AI will never develop thoughts, feelings, or ambitions - I think that is what the people you mention are worried about.
I find it important that the author raises social inequality as an immediate problem. Apart from that, I would like to share an example of how AI contributes in the case of protein folding: https://www.theguardian.com/science/2018/dec/02/google-deepminds-ai-program-alphafold-predicts-3d-shapes-of-proteins
"there’s a risk that humans will use it to push an agenda by passing off their own opinions as that of the AI." Worse, every prejudice available is already burned into the AI by way of the training material.
Second, what good is a human-level intelligence if there is cheap human labor ready for hire? What would make a real difference is if we had an artificial intelligence that is far more intelligent than a human.
An individual human brain may be limited in that respect; humanity as a collective, however, has already gone far beyond what an individual brain could achieve.
Using technology and communication as tools, humanity has discovered and understood complex systems in a way no individual brain could. Communication enables research groups to divide up the work, so that different people work on different sub-problems, without any single person having to understand every aspect of all the problems. Technology has enabled us to comprehend very complicated systems far beyond individual human capability - Lattice QCD, the analysis of the CMB, weather patterns, etc. - all enabled by computers used as extensions of the human brain. What we are doing now is a very efficient use of a combination of cheap human intelligence and state-of-the-art technology.
One recent crash was due to sun glare on a traffic sign. How would a self-driving car solve the "runaway trolley problem"? If it keeps going straight it will kill 5 people.
Bahle: "The problem is simple: if a machine was not told to do anything but was simply given a few algorithms and lots of data as a basis, who is to blame if something goes wrong?" This really is no problem at all.
In the end, someone has to keep an eye on the functioning of the car and the traffic. And if we switch to completely autonomous cars, a completely new job has been predicted: remote car driver (cf. drone pilots).
(IIRC, while the computer chess champion was programmed to play chess, the computer go champion was programmed to learn, and then learned go.) For that matter, just normal calculations: we are hopelessly inadequate.
(OK, this isn't normally considered part of intelligence, but we must be careful to disqualify something from belonging to intelligence just because a computer can do it.) What happens when computers learn to design AI?
This is not just Moore's law, which makes all computing faster (though it will presumably break down at some point). It would take a human centuries to do the calculations of even a simple program which runs for a few minutes.
Once someone writes a program which designs AI, then this can design a better AI, really fast, just like a computer can calculate in a few minutes what would take a human a lifetime.
That is relevant insofar as some of the possible problems to come with AI will only become critical when it's real AI. There are some prominent people who state that today's AI is not "intelligent"
You may want to read, for example, here: https://www.quantamagazine.org/to-build-truly-intelligent-machines-teach-them-cause-and-effect-20180515/ Looking at the brain, one can see that today's AI approaches rely on an overly simplified model of biological brains as it was understood in the 1960s.
Much of what we know today about real brains and what is essential for their effective functioning is missing in AI. I want to mention two examples: Real brains are not just networks of neurons that exchange electrical impulses via synapses.
Real brains are profoundly chemical devices, and the dozens of neurochemicals add a vast number of additional degrees of freedom to the brain's state space.
From modern affective neuroscience we know that even very tiny, locally applied changes to the neurochemistry can have a dramatic effect on the brain's functioning. A second example is the topology.
Closed loops are ubiquitous in the brain and the phylogenetic developmental layers of the brain add even more nested, recursive processing: The brain builds representations of representations of representations ...
An illustrative example of top-level self-referential operation is vision: when you cannot move (not even your eyes) you lose your capability of vision within a short time and drift into weird hallucinations.
The reason for this is that the brain uses a sensori-motor feedback configuration: it correlates how visual stimuli change with the motor actions it initiated.
Without this sensori-motor closed loop, the brain cannot construct a stable visual representation. I am pretty sure that the possibilities of AI are and will remain very limited as long as it is not based on fundamentally different models.
The assumption seems to be that reason is an attribute of the soul, that having a soul means you freely will your own goals, that therefore an artificial soul in computers will have ungodly traits, because Men Are Not Gods.
How many people care about creating hardware designed to operate for a hundred years? Artificial Intelligence, if it ever happens, is so unknown that only the most general (and therefore useless) statements can be made about it.
planning tasks than any human being. What I am worried about is the situation when human beings subject to greed and hatred get to define greedy and hateful goals for AIs.
It will just do that with no mercy and no regret. I'd rather have a benevolent AI controlling greedy and hateful human beings than greedy, hateful human beings controlling malicious AIs. But the latter is just what is going to happen when the development of AIs is left to evolve by market forces, given the state this world is in.
If you've read science fiction from the 50s, and popular science literature from around that time, it was assumed that computers would be large, expensive, rare, be owned only by governments, universities, large corporations, etc., and that access to them would be limited to a privileged few.
Re who gets to ask questions: I think one of the world’s leading cancer institutes - Memorial Sloan Kettering, in the Big Apple - uses Watson to assist oncologists with diagnosis and treatment options (also research);
It turns out the future we actually live in is one of networks of microcomputers, something that many really had a hard time seeing coming ("There is no reason for any individual to have a computer in his home", one CEO famously said).
A worthy project would be an AI trained to look for correlations in different sets of data (do power lines cause cancer, is social program A really working, etc.). This is something people are bad at, even physicists! An international collaboration like CERN (!) could be formed so everyone shares the expense, the access, and the results.
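At its simplest, such a correlation-hunting system just scores every pair of variables and flags the strong ones. The sketch below is a toy illustration (the dataset, the column names, and the 0.8 threshold are invented for the example; a real system would also have to control for confounders and multiple comparisons):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def scan_for_correlations(columns, threshold=0.8):
    """Return every pair of column names whose |correlation| meets threshold."""
    names = list(columns)
    hits = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = pearson(columns[a], columns[b])
            if abs(r) >= threshold:
                hits.append((a, b, r))
    return hits

# hypothetical toy data: "incidence" tracks "exposure", "noise" tracks nothing
data = {
    "exposure": [1.0, 2.0, 3.0, 4.0, 5.0],
    "incidence": [2.1, 3.9, 6.2, 7.8, 10.1],
    "noise": [5.0, 1.0, 4.0, 2.0, 3.0],
}
hits = scan_for_correlations(data)
```

On this toy data only the exposure/incidence pair survives the threshold; everything hard about the real project (causation vs. correlation, data quality, scale) lives outside this loop.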
For example, it takes decades to train a human radiologist, who then has a potential career of a few more decades, but AI radiologists are already competitive or superior in several domains, and once you have trained one, you have essentially trained them all.
Once the AI equivalent is trained, a few milliseconds is sufficient to transfer it to any other computer with sufficient processing power, and such computers can be manufactured for a few thousand dollars. In any case, the real processing power today doesn't reside in some box under somebody's desk but in the internet, and a thousand or a million CPUs can go belly up without changing this. All that said, I loved Arun's story about bot-on-bot violence.
Awesome for diagnosing stroke, or eye pathologies, etc., perhaps awesome for other commercial purposes, but still just tools that do as they are directed. It is not precisely true that we cannot figure out why trained neural nets are doing what they are doing, or how they are doing it.
This can reveal whether inputs matter, how much, and often analysis reveals the unexpected relationships found by the net.I am also not convinced by the article that a neural net could define another neural net.
Programmers of AlphaZero had to decide what the inputs and outputs would be, how to segregate the inputs (if they used a divide-and-conquer approach), how the layers would be sized and interact, the activation function(s), and finally what the output would be and how to interpret it.
All that requires a human understanding of how to formulate the problem to be solved. I doubt humans disappear in the next 50 years; I don't think anybody knows how to formulate for a neural net the problem of understanding how to formulate problems for a neural net.
Meaning, I don't know how to start on a general intelligence neural net, that could automatically search for and read online literature about playing game X and then produce a net that learns to play game X.
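The point about human design decisions can be made concrete with a toy net. In the sketch below (all sizes, names, and the activation choice are arbitrary illustrations - nothing here reflects AlphaZero's actual architecture), every structural choice is fixed by a programmer before any learning can happen:

```python
import random

random.seed(0)

# Every structural choice below is a human decision made *before* training;
# the net cannot choose any of these for itself.
N_INPUTS = 4           # human decision: what the net is allowed to see
HIDDEN_SIZES = [8, 8]  # human decision: depth and width
N_OUTPUTS = 2          # human decision: the size of the answer space

def relu(x):           # human decision: the activation function
    return max(0.0, x)

def make_layer(n_in, n_out):
    """One fully connected layer: n_out rows of n_in weights plus a bias."""
    return [[random.uniform(-1, 1) for _ in range(n_in + 1)]
            for _ in range(n_out)]

sizes = [N_INPUTS] + HIDDEN_SIZES + [N_OUTPUTS]
layers = [make_layer(n_in, n_out) for n_in, n_out in zip(sizes, sizes[1:])]

def forward(x):
    """Run the fixed architecture on one input vector."""
    for i, layer in enumerate(layers):
        x = [sum(w * v for w, v in zip(ws, x)) + ws[-1] for ws in layer]
        if i < len(layers) - 1:
            x = [relu(v) for v in x]
    return x

# human decision: how to interpret the output (here, argmax = class index)
def predict(x):
    out = forward(x)
    return out.index(max(out))
```

The weights here are random, so the predictions are meaningless; the point is that the input encoding, topology, activation, and output interpretation are all supplied by a human, which is exactly what a "net that designs nets" would have to automate.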
This principle is put forth to explain how a car-control AI will surpass the abilities of a human driver: the experiences of every move made by computer-controlled cars all over the world will allow the program to handle every possible condition that could ever happen.
All experimental results could be validated and then encoded in a global, all-inclusive statistical database that holds the sum of all discovered experimental experience. This process would avoid a problem that I have seen in science, where the same results are discovered over and over again by experimenters who have no idea about the details of what has been turned up in the past. For example, I have seen results produced by a chemist who has found a way to produce metallic crystals of hydrogen using Rydberg blockade that produce muons when irradiated by UV light.
This experience might be interesting to particle physics if they had access to the data and believed it, since the experimental results were peer reviewed, replicated, validated, and universally accepted.
You’ll notice that the people (by which I mean corporations, ahem) most interested in advancing AI aren’t concerned about keeping the techniques used to generate it to themselves.
Some facts that do not get mentioned by the AI PR hype machine are that the very best chess players can now beat the best chess computer programs (an expert chess player plus the ability to use computers to evaluate positions offline).
To the best of my knowledge, this is the unvarnished state of the art in machine learning (note I avoid the use of the word "artificial intelligence"): Machine learning is good at well-defined tasks for which it is possible to prepare a training sequence from a known database, such as: 1.
Wouldn't it be nice to just load all your laundry at once: bath rugs, red silk blouse, cat vomit rag, husband's navy blue work shirts, child's expensive jeans that can shrink, into one giant load and have an intelligent washer/dryer deal with it?
Math likely came with early inter-city commerce, when a tally was needed (usually just some notches on a stick). People, though, have probably created poetry for millennia - math is just a recent novelty.
We aren't going to suit up autonomous car passengers like fighter jet pilots on the off chance of saving one orphan with a good sob story at the expense of killing two boring former Dancing With The Stars contestants. Granted, the idea has a lot of attraction to wealthy technical types who have realized that they are going to die.
And computers will be large machines requiring banks of memory, and only governments and large corporations will be able to afford them, or have the expertise necessary to run them...
At that time the accepted metric for machine intelligence was the "Turing Test", that a human could engage in a conversation via a Teletype machine and was unable to discern whether he was talking to a machine or a person.
How do you prevent that limited access to AI increases inequality, both within nations and between nations? dlb wrote: "if it ever happen, is so unknown that only the most general (and therefore useless) statements can be made about it" We already know enough to make some useful statements.
So would it be ethical to kill someone in the waiting room and harvest their organs? Someone has pointed out that delaying AI in self-driving cars and so on will kill far, far, far, far more people than actual trolley-problem situations ever would.
(Famously, Univac once correctly predicted that Eisenhower would win, which no-one believed, so the prediction was held back to avoid embarrassing those working with Univac.) As some pundit remarked, you know that you are reading old science fiction when, as future time goes on, computers get bigger and bigger instead of smaller and smaller. Asimov of course also wrote much about robots, which were of human intelligence with a "positronic brain"
(positrons were new at the time, so he adopted the term to sound cool.) When I was reading Max's Life 3.0, I noticed that many of the moral questions had already been discussed in Asimov's fiction more than half a century ago. "It turns out the future we actually live in is one of networks of microcomputers, something that many really had a hard time to see coming ('There is no reason for any individual to have a computer in his home', one CEO famously said)." That was Ken Olsen, CEO of Digital Equipment Corporation (see the link above to see how this relates to Multivac).
Markus wrote: "To me, this fear of AI is just ridiculous" I suspect that people find it easier to talk about the potential problems with AI than tackle the long list of real problems we're facing right now and in the near future.
more than the other way around. Abstract: With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour.
You are right in that any such decision would have to factor in political decisions and regulations that are enforced for this reason, but this does nothing to negate my point that the question of ethics will be settled by market forces - within the legal bounds of course.
"In this case it is possible that the AI system makes the (logical but most would agree least desirable) conclusion that the optimum outcome is to run over the pregnant mother." Then it would be a very stupid AI if it thinks that killing a few people in car crashes is a solution to the problem of overpopulation.
If it were really smart enough to think about overpopulation, a better strategy might be to aim for the Vatican and keep killing the Pope until one is elected who doesn't pronounce that birth control is a sin.
:-| "the current problem with full AI - there is no way to address moral issues" Yes, but holding back on AI for self-driving cars causes far, far more deaths than the occasional AI-caused death (which is a choice between killing fewer and killing more, or whatever).
But what about the moral issue of the human driver ("the nut behind the wheel", as old discussions of automobile safety put it)?
(specific) AI is already making decisions without human interference (like buying and selling on the stock market), or advising humans (drones in the army, analyzing medical data), where it is conceivable that the decisions will be shifted to specific AI in the future.
But, what the heck, we will to some extent trust or distrust human rulers and experts because of double agendas, so it is likely that ethical / trust / control issues will also remain present forever with AI.
You have to set up and configure your car just like you have to set up your computer or cell phone. In the simulator, you are given, say, fifty scenarios (all of which the car must be capable of distinguishing from its sensory data).
And of course, you can reconfigure whenever you like. The AI that does this can be trained to generalize from these inputs like any other AI - a long process, to be sure - but once it is trained, the actual decision is just a lot of dot products that can be computed in microseconds, in real time, far faster than any human reaction time.
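To see why the trained decision is cheap: each output of a trained layer is literally one dot product. The sketch below is a made-up illustration (the feature count, weight values, and three-action set are all invented); one decision here costs 3 × 50 = 150 multiply-adds, which a modern CPU executes in well under a microsecond:

```python
def dot(w, x):
    """Plain dot product - the whole of a trained layer's arithmetic."""
    return sum(wi * xi for wi, xi in zip(w, x))

# hypothetical trained weights: 50 scenario features in, 3 actions out
N_FEATURES, N_ACTIONS = 50, 3
weights = [[0.01 * (i + j) for j in range(N_FEATURES)]
           for i in range(N_ACTIONS)]

def decide(features):
    """Score each action with one dot product; pick the highest score."""
    scores = [dot(w, features) for w in weights]
    return scores.index(max(scores))

# cost per decision: N_ACTIONS * N_FEATURES = 150 multiply-adds
```

The expensive part - choosing the 50 scenarios and training the weights - happens once, offline; the per-decision arithmetic is trivial.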
In court, if necessary, prosecution or defense can show how the actual accident situation is most similar to one of the training scenarios, or a combination of them, and what the owner decided, and let a human jury decide how closely the owner's decision and the car's decision correlate.
So I don't think jury information has to be perfect for them to decide. The insurance companies have an incentive to use any standardized method of assigning liability (because long court battles are very expensive), and they can also see how you set up and trained your car in order to set your insurance rates.
And finally, it means that car owners, to avoid liability, would have an incentive to set up their car with a minimum of selfish interest in an emergency, because that will result in the lowest insurance premiums.
after the AI takes action, the response of the vehicle, its technology (steering, brakes, tyres, acceleration, service history) and individual environmental conditions become the limiting factor.
The vast range of factors involved - many of which the AI will not be aware of or will not have learnt to take into account - the AI's decision speed, and individual vehicle characteristics make the interactions uncertain, or at least predictable only within certain limits.
Tibor Radó found a similar result with the Busy Beaver function: no Turing machine can compute BB(n) for all n - the computational status of an n-state machine cannot be settled in general - and BB(n) grows faster than any computable function, far faster than exponentially.
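For a sense of scale, here are the known values of the busy-beaver shift function S(n) for n-state, 2-symbol machines (S(5) was reported proven by the bbchallenge project in 2024; S(6) is unknown and astronomically large). Even at n = 4 the function has left plain exponential growth behind:

```python
# Known values of the busy-beaver shift function S(n), 2-symbol machines.
S = {1: 1, 2: 6, 3: 21, 4: 107, 5: 47_176_870}

# compare with plain exponential growth 2**n
for n in sorted(S):
    print(f"S({n}) = {S[n]:>10}   vs   2**{n} = {2 ** n}")
```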
Theoretically, we can think of characterizing the kinds of structures that exist in all three aspects: the input data, the algorithmic transformations of the data, and the outputs or predictions.
Implementations of AI can be differentiated (or classified) according to the characteristic combinations of these three features. If a new problem involves a combination that is similar to one that has already been studied, then you have a ``base-case''
But it sure would have to be taken care of during the actual engineering practice. Of course, following such an analysis scheme (of characterizing the structures in the three aspects) is easier said than done.
It never is going to be the case (except possibly in poorly written sci-fi novels / movies / media hype) that an implementation of an AI is some identity-less beast that ``somehow''
Let me give an analogy: just because you have gears, shafts, cams and stepper motors readily available in all shapes and sizes on the market, people don't therefore go out and connect them together at random, and *then* begin worrying whether the machine will stamp out its human operator or not.
It's high time that the media and the sci-fi authors stopped believing their own hype and stupid theories, and began taking the actual practice of engineering a bit more seriously. 3.
If someone (like Google, like Putin, etc.) devotes a billion dollars to private research into AI, even non-conscious problem-solving AI, they might solve problems in many forms of investing, manipulation of markets, etc., and with the money to implement such things, take control of economies, micro-target certain markets, and basically legally (perhaps with a modicum of illegality if they don't mind that) win most of the money in the world, and use it as leverage to bludgeon whole governments and populations. You wouldn't be able to stop them; their nets would be proprietary and sequestered away, guarded by private armies. So what then?
For many people, it would no longer make sense to spend a lot of money to buy a car, pay for the insurance and maintenance, be directly liable for any harm the car caused, find parking spots and pay for them, wash and clean it, and store it somewhere when they're home.
As for evaluating relative harm vis-à-vis an elderly homeless man and a pregnant mother of three (Sabine's scenario), that's a great plot for a sci-fi story.
Sabine's sci-fi story would be an oddly dystopian world in which a technologically-advanced society puts great value on the lives of mothers and children but doesn't care enough to solve the very solvable problem of homelessness.
Artificial intelligence designed by human intelligence may prove useful under certain constrained circumstances, but will be as generally satisfying as artificial food.
When Barack Obama was president, market force gurus predicted that mandatory health insurance and raising the minimum wage would increase the price of fast-food hamburgers.
A poorly designed, unsafe e-waste processing plant in a destitute Chinese community is better than *no* e-waste processing plant, at least in the short term.
We tell ourselves that market forces will eventually allow them to move beyond unsafe e-waste processing plants or onerous sweat shops.
Predictably, the market force gurus argued that the rich can afford humane treatment of chickens, but the poor can't afford that luxury and the "efficient"
If someone points to problems with market forces, more often than not he's labeled a socialist or bleeding-heart liberal who doesn't know how the real world works.
Indeed, this is how companies today justify clearly unethical - but legal - behavior, by pointing out their fiduciary responsibilities to stakeholders and shareholders, as well as a general obligation to remain competitive in a market-driven world.
The exact way your computer performs this task depends both on your hardware and your software, hence the output can be used to identify a device." There are literally thousands of computer vulnerabilities at all levels of computing: in hardware, in software, in networks, and in operating systems.
We will be able to build driver assistance systems that can't be hacked. It should be said that many of the computer security vulnerabilities that exist today are there not because we didn't know about them, but because up until now, no one cared about security enough to want to invest in it - at least not the companies building commercial microprocessors.
We generally know it is a computer that we are interacting with, but we don't necessarily know if the computer is using machine learning to give us the answers it is giving.
How do you prevent that limited access to AI increases inequality, both within nations and between nations?" Having an AI to answer difficult questions can be a great advantage, but left to market forces alone it’s likely to make the rich richer and leave the poor even farther behind.
we should think about how to deal with it." I agree that use of advanced machine learning, if placed only in the hands of wealthy individuals or only wealthy countries, would increase inequality.
If you are good at that, you will find that your visual perception starts to change, and the longer you can keep it up, the more you lose your ability to really see exactly what is there; properties like contrast, color and brightness start to vanish or "move".
the square-root operator (a very simple operator) converges towards its fixed point 1 when applied recursively to its own output (for positive real numbers).
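This convergence is easy to check numerically - each application of the square root halves the distance to 1 on a logarithmic scale, so any positive start value is driven to 1. A minimal sketch:

```python
import math

def iterate_sqrt(x, n):
    """Apply the square-root operator to its own output n times."""
    for _ in range(n):
        x = math.sqrt(x)
    return x

# both a large and a small positive start converge to the fixed point 1
big = iterate_sqrt(100.0, 50)    # starts above 1, decreases toward 1
small = iterate_sqrt(0.01, 50)   # starts below 1, increases toward 1
```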
By doing this, they change the activities and capabilities of the neuron networks profoundly. Taking these two large-scale self-referential loops, the brain is dual-closed-loop and topologically a torus.
The real killer AI app will come with maxima/minima programs running on a quantum computer using a humongous highly correlated statistical database.
The big breakthrough will be when a quantum AI builds and maintains its own highly correlated statistical database from analyzing random data that is fed into it. For example, how the weather, time of day, internet traffic activity, and solar activity events affect stock market prices.
An aircraft can fly much faster than a bird, but watch birds for a while and marvel at the bio-avionics that allows them to use each feather as an aileron that allows them to end their flight by landing on a tree branch with near perfect precision.
One thing I was glad to see ("Your computer isn’t like my computer") is that substrates should matter more in (computer science) semantics. On robots in the future, a big issue will be the economic one of redistributing the wealth of robot makers and owners to the people the robots replace.
So this doesn't solve the issue of liability if the automatic driver kills somebody. SM: For lots of obvious reasons, driverless cars would reduce insurance premiums to a small fraction of what they are now. Sure, they will be less likely to get into accidents;
but this doesn't address assigning liability when they DO get into accidents, and encounter situations where a moral choice could have been made. SM: For obvious reasons, the behavior of driverless cars needs to be standardized with the aim of minimizing harm. "Harm"
Should I progress in a straight line into a large crowd in the street, or intentionally swerve away from the crowd onto the sidewalk and possibly kill ten pedestrians? SM: if our technology ever gets that good, it's unlikely that there will be homeless people. We've already got the solution to homelessness: building houses and apartments and care facilities for the mentally disabled and disturbed.
It is socialism and taxes used to care for our relatively small percentage of people incapable of earning a living or taking care of themselves.
We already are "a technologically-advanced society that puts great value on the lives of mothers and children but doesn't care enough to solve the very solvable problem of homelessness."
Why in the world would it surprise you if some great technological advance occurs in AI, without changing humanity's selfish propensity to ignore the suffering of others when addressing it would cost them money?
The potential leap forward is profound: today the talk is about making devices the size of a domestic drone, capable of deciding for themselves and without human supervision who is to be attacked and then doing so.
If a majority of people don't own cars, the moral options are being configured by someone else and these options might not be transparent or particularly agreeable to the passengers.
I haven't forgotten that the only tangible benefit you offered for moral options was reduced liability for car manufacturers by putting "the owner of the vehicle on the moral spot again."
However, that also will almost certainly be insignificant. One key problem will be decisions and bias based on using human data. Another, though related, is resolving intrinsic emotional content attached to thought. The biggest is that moral calculus will naturally lead to a decision to exterminate, subjugate or alter humans. Early NNs are evolutionary in nature, with the same issues.
presuming technology advances enough to recognize such moral decisions." SM: Do you really have the right to make that decision for the couple on the side of the road? I'll flip the question: How is it your right to decide these life-and-death dilemmas, by mandating the morality of programmable machines, instead of leaving it to the individual driver?
But to leave existing liability laws intact (which are largely a consensus of society and expert legal opinion) we need to put the onus back on the individual. Do I know how to make the decision?
All again presuming the AI reaches a point where it can identify such dilemmas in time to execute a choice, which I think is possible in principle. SM: You want car owners to have moral options, but you don't want car manufacturers to have moral options? That preserves the law as it stands on most products.
Becoming a passenger, you surrender the right to make choices and gain the right to sue. As it stands now, you don't get to pick your human driver based on his moral choices in case of an emergency. SM: Don't pretend you need me to give you a long list of the harm cars can do.
So we should preserve the human endpoint and take morality away from the machines. SM: I haven't forgotten that the only tangible benefit you offered for moral options was reduced liability for car manufacturers by putting "the owner of the vehicle on the moral spot again." The tangible benefit is preserving existing liability law.
I would increase corporate liability if they make the moral choices programmed into robotic servants. SM: It makes more sense to standardize cars that reflect a moral consensus. I proposed a mandatory standard!
Assuming that the insurance company risk models you proposed are reasonably correct and honest, maximum probable harm leads to maximum actual harm - presumably and on average (obviously).
presuming technology advances enough to recognize such moral decisions" How is that relevant to your groundless dismissal of the critical role of reliable predictions for moral decisions and overall safety? Castaldo wrote: "How is it your right to decide these life-and-death dilemmas, by mandating the morality of programmable machines, instead of leaving it to the individual driver?"
You'll also recall that the entire reason for mandating standards for all cars - using a moral consensus - is to increase overall safety for everyone.
Castaldo wrote: "If society comes to a consensus about such morality, then I'm on board" There's no reason to think societies can't work out a consensus for driverless cars.
I am not surrendering my position on those, but I think they are secondary to this issue. I don't believe there is any social consensus on the trolley problems, other than the general chivalric rule of thumb "Women and children first."
I think the Trolley problems are evergreen philosophical problems precisely because the logical solutions to the problems grate on our evolved emotions and psychology and don't feel right, and the dissonance between what looks like the right answer and what feels right is distressing. For example, I will note that in the trolley problems we (humans on average) have this innate sense that taking an action to cause a death feels much more like murder than choosing to do nothing, and that choice leading to a death.
The evidence of that being innate is seen across cultures and history in which gender fights the wars, enforces the law, and takes other dangerous jobs, regardless of the physical demands.
It's understandable if you refuse to sacrifice your life to save many other lives. But our innate biases can be at odds with the straight logic of maximizing the probable years of life based on information available at the time.
For example, even if I were sacrificing my own life (thus removing any future emotional rewards I might experience), I don't think I could choose to let my daughter die in order to save three of her classmates.
And that is divorced from liability; I am not obligated to sacrifice my life to save anyone. Which is why I think the Trolley Problems are evergreen: they probe this dissonance between logic and emotions, and will never be solved to the satisfaction of people, thus there will always be disagreement about what is "moral".
Castaldo wrote: "I don't believe there is any social consensus on the trolley problems" In the context of driverless cars, society would reach a consensus if it makes everyone safer.
Castaldo wrote: "[trolley problems] will never be solved to the satisfaction of people." And therefore you want millions of individuals solving 50 unsolvable trolley problems, and somehow that will satisfy people. When people are brainstorming, they have to be willing to admit that some of their ideas might not be so good after all.
Rarely do I see identical problem/solution scenarios, despite in some cases hundreds of iterations of the same processes on similar, if not identical, equipment (I now deal with DELL Business-Class products exclusively). In fact, that's one of the biggest turn-offs in the I.T.
There is a massive infrastructure which must work together seamlessly, and it is amazing that it hangs together at all given all of the changes (some of which, to be fair, are forced by the hacking "community"). This is why I keep a print version of the OED.
was at least in part explained by someone (with a bad reputation, so I won't name him here) who said that, whereas in the past if someone wanted to eat, he had to plant a crop, we now have a situation with engineers, developers, and designers having to re-invent or "fix"
@Steven Mason asks: "Do you still think it's a great idea to make car owners decide on 50 moral options?" I think it is a workable idea that you incorrectly dismiss out of hand for emotional reasons. The idea is not to pre-decide every moral decision; the idea is to let an artificial intelligence develop a model of how that individual makes their moral decisions.
The training is there to determine what to do in the ambiguous circumstances that will undoubtedly arise. There is no reason in my mind that AI cannot advance to the point where it comprehends everything from a scene that a human driver can comprehend. So yes, until I am proven wrong, I do think it is fine for however many millions of drivers we have to customize their machines to obey their own moral choices, within the law of the land.
Just as they are free to choose a safer car or a less safe car, a less polluting car or a more polluting car, a more fuel-efficient car or a less fuel-efficient car, all within the bounds of what is legal. By your lights, we should all be driving exactly the same car, down to the color, that statistically speaking causes the least harm per mile driven.
And even for the highest levels of human intelligence it is not clear in every situation which action will produce the least harm, so we fall back to moral rules of thumb that vary between individuals.
the morality of the car can match to a high degree the morality of the owner." Castaldo wrote: "Until I am proven wrong, I do think it is fine for however many millions of drivers we have to customize their machines to obey their own moral choices." This is too funny.
Castaldo wrote: By your lights, we should all be driving exactly the same car, down to the color, that statistically speaking causes the least harm per mile driven.
If human drivers were better at making split-second decisions, we might teach students that it's better to hit an elderly homeless man than a pregnant woman, or it's better to hit the obstacle in front of them than swerve and run over a bicyclist.
Training a neural net with 50 moral scenarios and a set of MY preferred decisions, so it can generalize a model of what my decisions would be in new moral scenarios, is not "pre-deciding". I DO presume that my moral compass doesn't change constantly, and I DO presume one's moral compass can be accurately modeled by such a procedure. But we may have to disagree on what we mean by "pre-decided"; I do not think the output of the neural net, when given a new situation, is "pre-deciding".
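The procedure described in this thread (collect an owner's decisions on a batch of hypothetical scenarios during a training phase, then generalize to unseen scenarios) can be sketched minimally. Everything below is an illustrative assumption, not a real system: the feature names (`occupant_risk`, `pedestrian_count`, `swerve_risk`), the scenarios, and the choice of a nearest-neighbour model standing in for a trained neural net.

```python
# Hypothetical sketch: model an owner's moral preferences from a small set of
# labeled scenarios, then generalize to a new scenario via nearest neighbour.
# Feature names, scenarios, and labels are illustrative assumptions only.
import math

# Each training scenario: (occupant_risk, pedestrian_count, swerve_risk)
# paired with the action the owner chose during the training phase.
TRAINING = [
    ((0.9, 1, 0.2), "swerve"),
    ((0.1, 3, 0.8), "brake"),
    ((0.5, 2, 0.5), "swerve"),
    ((0.2, 5, 0.9), "brake"),
]

def predict(features):
    """Return the owner's likely choice for an unseen scenario (1-NN)."""
    # Pick the labeled scenario closest in feature space and copy its action.
    _, action = min(TRAINING, key=lambda pair: math.dist(pair[0], features))
    return action
```

A 1-nearest-neighbour lookup is about the smallest model that "generalizes" from labeled examples; a trained network would interpolate more smoothly over many more features, but the principle — the owner supplies the labels, the model extends them to new situations — is the same.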
Further, we ARE talking about a physical feature: in both your scenario and mine, the behavior of the self-driving car is controlled by some sort of decision-making hardware running some sort of code.
That sets a precedent for eliminating other choices. SM says: "It's high time for you to make a fact-based case for how your idea makes everyone safer." It's high time for you to understand that I consider that a misguided goal.
SM says: "Even if you think reducing liability for car manufacturers is a more important goal..." As I have already written, and you are ignoring, I don't think I AM reducing liability for car manufacturers; I think I am leaving liability where it already resides: with the driver, or the owner of the robotic driver, which would be the car owner.
A jury (of humans) can decide whether their decisions in the training phase reflect the outcome chosen by the equipment, as juries of humans already decide questions of liability in cases where it appears equipment failure was involved.
Here are the two main points: (1) If people are required to customize 50 (or whatever) moral options in driverless cars, will that increase overall safety?
@Steven Mason: (1) Deferring liability can mean deferring new liability, which is what your plan would impose on manufacturers by making them responsible for accidents due to driving decisions for which they are not currently responsible.
Heck, I once peer-reviewed a paper after spending a week on it, then two days later realized it had a fundamental flaw, called the editor, and told him I made a frikkin' mistake.
We both know that safety is one of the most important design factors for cars, and trends clearly indicate that safety considerations are becoming ever more important.
Castaldo wrote: "your plan will impose on manufacturers by making them responsible for accidents due to driving decisions." A computer scientist is expected to be reasonably rational and logical.
Castaldo wrote: "I think we proceed from different fundamental beliefs." How many people do you suppose share your fundamental belief that safety isn't very important in driverless car design?
If dollars are votes, they are voting for style, comfort, utility, and cachet more than safety. Manufacturers design cars to sell, and they have found that safety is one concern but not the top concern of citizens.
on anything; heck, the world population can't come to a moral consensus on easy questions like global warming, so by what magic will they come to a *binding* moral consensus on when it is appropriate to sacrifice one's own life, perhaps along with your child's life, to preserve the lives of strangers? SM says: "How many people do you suppose share your fundamental belief that safety isn't very important in driverless car design?" That is a logical fallacy; people can think safety is important without thinking it is the most important, and since what more than 2/3 of consumers with free choice actually buy is not the safest vehicles, trucks, etc., I'd say most of them agree that safety is not the most important thing to consider.
SM says: "You want to end this discussion because you can't make the case that your idea is good." I want to end that discussion because I don't think you are being rational, or, to be more accurate, I think you are reasoning from different fundamental beliefs, axioms, givens, premises, whatever you like to call them.
Castaldo wrote: "If dollars are votes, they are voting for style, comfort, utility and cachet more than safety." Instead of telling stories, offer an example.
The obvious logical fallacy is your refusal to admit that safety is important, as well as your refusal to state what's more important - in the context of moral options for driverless cars (I keep reminding you and you keep "forgetting").
Castaldo wrote: "I also think (with plenty of evidence) that such a consensus will inevitably be corrupted by people and corporations with money, to serve their own selfish interests." So a significant portion of current car design standards are a result of corruption and selfish interests?
If you're dedicating your entire career to just deep reinforcement learning and a more practical option for industry comes out, you may end up realising your previous research is not very useful and you won't be able to adapt to new methods.
Furthermore, someone graduating with a degree in Computer Science will have a much better understanding of a variety of extremely useful tools, such as optimising algorithms for hardware, efficient data structures, writing more readable and reusable code, and working as part of a development team, to name a few.
- On 24 October 2020
Why AI will probably kill us all.
When you look into it, Artificial Intelligence is absolutely terrifying. Really hope we don't die.
Sci Fi Movies That Will Completely Blow You Away In 2018
2018 is shaping up to be a banner year for sci-fi enthusiasts. From a new entry to the Star Wars ..
The World In 2050 [The Real Future Of Earth] – Full BBC Documentary 2018
The Future of Augmented Intelligence: If You Can’t Beat ‘em, Join ‘em
Computers are getting smarter and more creative, offering spectacular possibilities to improve the human condition. There's a call to redefine Artificial ...
The Rise of Artificial Intelligence | Off Book | PBS Digital Studios
The History of Artificial Intelligence
#219: McKinsey & Company (McKinsey Global Institute) on Artificial Intelligence and Machine Learning
Data and automation have the power to transform business and society. The impact of data on our lives will be profound as industry and the government use ...
What happens when our computers get smarter than we are? | Nick Bostrom
Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being.
The Dangers of Artificial Intelligence - Robot Sophia makes fun of Elon Musk - A.I. 2018
The Dangers of Artificial Intelligence - Robot Sophia jokes and makes fun of Elon Musk - A.I. 2017 - 2ndEarth Alternative (22/04/2017)
The robot-proof job men aren't taking
Nursing is the job of the future. So why have men stayed away? It's easy to imagine that the jobs of the future, ..