The most powerful person in Silicon Valley
A key element of this value creation comes from connecting portfolio companies so that they can help each other grow.
Only a month earlier, after committing $45 billion to back a second fund, bin Salman told Bloomberg that without Saudi backing, “there will be no Vision Fund.” As gruesome details about the murder emerged, the pressure on Son became intense.
“No one wants to be connected with blood money.” Some of Son’s Vision Fund companies publicly tried to distance themselves from Saudi Arabia. (Compass’s Reffkin issued a statement saying, “The death of Jamal Khashoggi is beyond disturbing because the freedom and safety of the press is something that is incredibly important to me.”) Uber’s Khosrowshahi and Arm’s Segars pulled out of a major Saudi investment conference in Riyadh in October.
“As horrible as this event was, we cannot turn our backs on the Saudi people as we work to help them in their continued efforts to reform and modernize their society,” Son said in a statement.
“He is not going to turn his back on $45 billion.” The global network that Son has built during his four-decade career is as vast, and as important to him, as his war chest, friends say.
Son has also made clear that the Vision Fund is very much open for business, announcing a slew of new deals, including $1.1 billion for View (a maker of “smart” windows), $375 million for Zume (which builds robots that can cook), and a lead investment in ByteDance and its AI-powered news and video apps.
Elizabeth Dwoskin On The Reckoning of Silicon Valley | Bioneers
No one is better equipped to help us understand the perils and promise of what is happening in Silicon Valley than Elizabeth Dwoskin.
As The Washington Post’s Silicon Valley correspondent since 2016—before which she was The Wall Street Journal’s big data and artificial intelligence reporter—she has become the most penetrating observer and critic of the tech scene.
She has broken many crucial stories on data collection abuses, online conspiracies, Russian operatives’ use of social media to influence the 2016 election, gender bias in the tech world, Instagram as a vehicle for drug dealing, and many more.
Dwoskin may even be one of the most important investigative journalists of our era because she is relentlessly and insightfully tracking the forces that have the potential to dramatically change the fate of our species.
In her keynote address at Bioneers 2018, Dwoskin discussed the Facebook monopoly, the harmful effects of social media on society and its users, and Silicon Valley’s shady and reckless business dealings and handling of private data.
If a promising new social product emerged, Facebook would buy it, as it did Instagram; overpower it, as it is doing to Snapchat; or its mere presence would dissuade people from building social products at all, as is happening across the Valley right now.
But her critique also extended to the role that technologists play in engendering these problems, the minute but impactful engineering decisions and choices that arise from a culture that is hyper-focused on growth and on commanding attention, often at the cost of well-being.
Another is the so-called recommendation engine, orchestrated so that when you click on one image of a hedgehog you get 1,000 more: countless nudges and hooks, infinite micro-decisions that, in my view, comprise an untold history of Silicon Valley.
Robert Lustig, one of the doctors who proved that sugar was addictive, is today seeking to demonstrate that excessive technology use lights up the same destructive pathways in the brain as sugar does.
If a person engages, well then they must like it, and therefore, if you do things to induce them to engage, if you tweak and test your way to hyper growth and hyper engagement, it’s all okay.
At least one powerful executive, the one who sat among the supporters behind Judge Kavanaugh in his Senate confirmation hearings a few weeks ago, argued that they should take very limited action, because a lot of the sites that trafficked in sensationalism appeared to be more right-leaning, and they did not want to risk appearing biased against the right.
Tech companies are actually moving away from the scrolling news feeds that we have come to be used to, and are starting to emphasize private messaging in closed communities that are not visible to the broader public.
You join one group and Facebook’s algorithm shows you similar groups to join: groups joined by people who are similar to you, or by people in the group you already joined.
So you join one extremist group and, well, now you’re in this ugly extremist echo chamber that software designers maybe didn’t create but have certainly amplified.
The internet’s, and particularly the smartphone’s, uncanny ability to profile you and find you wherever you are means that once you’re in a certain bucket, you’re likely to be pummeled with similar messages.
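The group-recommendation mechanics described above can be sketched as a toy co-membership counter. This is an assumed simplification for illustration, not Facebook’s actual algorithm, and all user and group names here are hypothetical:

```python
# Toy sketch of "people in your group also joined..." recommendations.
# Hypothetical data and logic; not any platform's real system.
from collections import Counter

# user -> set of groups that user has joined (made-up data)
memberships = {
    "ana":  {"hedgehogs", "gardening"},
    "ben":  {"hedgehogs", "gardening", "foraging"},
    "cara": {"hedgehogs", "foraging"},
    "dan":  {"chess"},
}

def recommend(joined_group, memberships, k=2):
    """Rank other groups by how many co-members of `joined_group` belong to them."""
    counts = Counter()
    for groups in memberships.values():
        if joined_group in groups:
            for g in groups - {joined_group}:
                counts[g] += 1
    return [g for g, _ in counts.most_common(k)]

print(recommend("hedgehogs", memberships))
```

The amplification effect the keynote describes falls out of exactly this kind of counting: whatever your co-members cluster around, extremist or benign, is what gets surfaced next.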
This law changes the whole way that Europeans regulate privacy, and one little-known part of it is that it requires companies to delete far more records than they have been until now.
Society is urgently telling them to protect people’s privacy, but we are also telling them to maintain complete visibility, to increasingly police that content, and to make judgment calls about its nature.
The exception to this painful decision-making, interestingly enough, may soon be Twitter, which recently introduced rules prohibiting content that results in real-world harm.
And in these moments I think about how important it is to retain my own sense of shock, because it’s very easy in this line of work to get numbed; most journalists have the heard-it-all-before effect. But for me, staying in touch with my feelings, and with myself as a human even before I’m a journalist, is the most important driver.
From there we moved to Twitter, where we showed how powerful, influential Americans came to be duped into retweeting content from Russian impersonators: often accounts they thought they were arguing with, when in fact the fights were fake.
He knew that the goal of Facebook ads isn’t just to sell products; it’s to lure people into liking your brand and becoming your Facebook friend, so that you can send them more content, this time for free.
In this arc, we’ve gone from the highs of tech companies taking credit for the pro-democratic uprisings in the Middle East in 2011 to the lows of Russian meddling and Cambridge Analytica today.
Tim Kendall, the former Facebook executive who’s become an anti-addiction crusader, says this approach is like telling an alcoholic to stop drinking because they drank too much last night.
“We’re not spammers; we’re just doing what political activists are doing every day online.” And their decision to do this is driving a stake through the heart of what online organizing means today.
But it’s more likely, I think, that change will come from outside forces – lawsuits, state attorneys general, regulators in the US and abroad, and politicians from both sides of the aisle and from across the pond who are increasingly demonizing tech, sometimes in ways that go way, way too far.
State-level lawsuits are particularly important because they sidestep the broken political process at the federal level, and discovery in a lawsuit is important because it may give clues to people’s mindsets and intents, and that’s why tech companies are fighting them hard right now.
For my part, I will go back to my desk tomorrow morning, I’ll get my coffee and prepare to spend the day confronting companies that are wealthier and more powerful than nation states.
Hiring For The AI (Artificial Intelligence) Revolution - Part I
In the coming years, Artificial Intelligence (AI) is likely to be strategic for myriad industries. But there is a major challenge: recruiting.
Simply put, it can be extremely tough to identify the right people who can leverage the technology (even worse, there is a fierce war for AI talent in Silicon Valley).
I think it's critical for AI teams (natural language processing, machine learning, etc.) to have a mix of backgrounds: hiring Ph.D.s and academics who are thinking about and building the latest innovations, but combining them with individuals who have worked in a business environment, know how to code and ship product, and are used to the cadence of a start-up or technology company.
Increasing sophistication in automating key aspects of building, training and deploying AI models (such as model selection, feature representation and hyperparameter tuning) means the skillset needed must focus on model-lifecycle and model-risk-management principles, to ensure model trust, transparency, safety and stability.
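As a concrete illustration of the hyperparameter-tuning step mentioned above, here is a minimal grid search over a regularization strength. The toy one-feature ridge-regression model and the data are hypothetical, chosen only to keep the sketch self-contained; real tooling automates the same loop at much larger scale:

```python
# Minimal sketch of hyperparameter tuning: try each candidate value,
# score it on held-out data, keep the best. Toy model and data.

def fit_ridge(xs, ys, lam):
    """Closed-form 1-D ridge regression: w = sum(x*y) / (sum(x^2) + lam)."""
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs) + lam
    return num / den

def mse(w, xs, ys):
    """Mean squared error of predictions w*x against targets y."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def grid_search(train, valid, grid):
    """Return the lambda from `grid` with the lowest validation error."""
    best_lam, best_err = None, float("inf")
    for lam in grid:
        w = fit_ridge(*train, lam)
        err = mse(w, *valid)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam

# Noisy y ~ 2x data, split into training and validation sets.
train = ([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
valid = ([5, 6], [10.1, 11.9])
best = grid_search(train, valid, [0.0, 0.1, 1.0, 10.0])
print(best)
```

The same select-and-validate loop generalizes to model selection and feature representation; what the passage argues is that once tooling does this automatically, the human skillset shifts to governing the lifecycle around it.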
Guy Caspi, CEO and co-founder at Deep Instinct: People who have strong academic backgrounds sometimes lean towards one of two directions: either they cannot leave a project until it’s perfect, often missing important deadlines – or the opposite: they’re satisfied with basic academic-level standards that may not meet an organization’s production requirements.
Can we make artificial intelligence ethical?
AI's greatest advocates describe the Utopian promise of a technology that will save lives, improve health and predict events we previously couldn't anticipate.
This includes exploring how to avoid biases in AI algorithms that can prejudice the way machines and platforms learn and behave; when to disclose the use of AI to consumers; how to address concerns about AI's effect on privacy; and how to respond to employee fears about AI's impact on jobs.
That's especially true in cases in which human judgments are necessary to identify content that is 'inappropriate', or areas such as marketing where companies must ensure AI doesn't inadvertently apply biases.
An ethical approach to AI requires coming up with a long-term understanding of the values we want to see reflected in this technology - and shaping rules that create confidence AI's applications will reflect those values.
But universities - full of critical thinkers, insulated from short-term market pressures and focused on big ideas - are the proper venues for advancing the technology and the implications it brings.
The federal government will need to provide significant increases in funding for AI to help the United States maintain its technical edge and step up its coordinating role in the ethics and workplace arenas.
The sooner we come to an understanding of AI that ensures its powerful capabilities are a net positive for people and workers, the more wisely we can develop and deploy it.
- On Monday, June 1, 2020
The First Church of Artificial Intelligence - Creating Their AI God
Inside the First Church of Artificial Intelligence: Anthony Levandowski makes an unlikely prophet. Dressed Silicon Valley-casual in jeans and flanked by a PR rep ...
How a handful of tech companies control billions of minds every day | Tristan Harris
A handful of people working at a handful of tech companies steer the thoughts of billions of people every day, says design thinker Tristan Harris. From Facebook ...
Bill Gates interview: How the world will change by 2030
The Verge sat down with Bill Gates to talk about his ambitious vision for improving the lives of the poor through technology. It just so happens that The Verge ...
New Brain Computer interface technology | Steve Hoffman | TEDxCEIBS
Brain Computer interface technology opens up a world of possibilities. We are on the cusp of this technology that is so powerful and has the potential to so ...
China: "the world's biggest camera surveillance network" - BBC News
China has been building what it calls "the world's biggest camera surveillance network". Across the country, 170 million CCTV cameras are already in place and ...
Michio Kaku: How to Stop Robots From Killing Us
Even if computer technology continues to double every 18 months—which is doubtful—we could put a chip in robots' brains to shut them off if they start to get ...
My Future Prediction - Masayoshi Son
Masayoshi Son (Japanese: 孫 正義, Hepburn: Son Masayoshi; Korean: 손정의, Son Jeong-ui; born August 11, 1957) is a Korean-descendant (Zainichi Korean) ...
US is still in the lead on artificial intelligence: Arrow Electronics CEO
Arrow Electronics CEO Mike Long discusses his outlook on artificial intelligence and President Trump's AI initiative.
Under The Spell of Hollywood Part 1
This is a long video with a ton of hard-hitting content.
How Silicon Valley Helps the Government Track All of Us.
New AI tools could empower the government to violate our civil liberties.