

Monday, June 6, 2016

Google teaches car to honk; flipping the bird next?



This month’s report on Google’s autonomous car fleet reveals two new features coming to the company’s prototype car: the ability to honk the horn, and a hum similar to that of most non-electric cars.

The sound of a car horn might be the stuff of nightmares for frequent drivers, but Google believes it can be a powerful tool for preventing accidents on the road. For the first few months of testing, the honk sounded only inside the vehicle, but Google recently made it audible to nearby cars.

“Our self-driving cars are designed to see 360 degrees and not be distracted, unlike human drivers, who are not always fully aware of their surroundings. Our self-driving software is designed to recognize when honking may help alert other drivers to our presence — for example, when a driver begins swerving into our lane or backing out of a blind driveway,” said Google in the report.

Honk if you love attention

The self-driving system has two types of honk: two short honks as a friendly heads up to the other driver, and one long honk for urgent situations. Google’s testers report back to engineers on all honks, to make sure that the car is not being obnoxious on the road.
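The two honk types described above can be sketched as a tiny decision helper. This is purely illustrative: the function name and the `urgent` flag are hypothetical, since the report describes the two honk types but not Google's actual triggering logic.

```python
from enum import Enum

class Honk(Enum):
    FRIENDLY = "two short honks"  # a polite heads-up to another driver
    URGENT = "one long honk"      # a warning for urgent situations

def choose_honk(urgent: bool) -> Honk:
    # Hypothetical helper: map a detected situation to one of the
    # two honk types named in Google's report.
    return Honk.URGENT if urgent else Honk.FRIENDLY

print(choose_honk(False).value)  # a friendly heads-up by default
```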

Google also wants to make sure pedestrians, cyclists, and visually impaired road users know when the car is active, so it has added a ‘hum’ similar to that of most non-electric cars.

During the testing phase of the hum, Google explored a variety of sounds, including ambient art sculptures, consumer electronic products, and orca noises. We hope that when the car is available, Google offers these fake engine noises as a variety pack.

Google’s autonomous fleet, which totals 70 cars, reported one crash this month, on May 3. According to the report, the crash happened while a human driver was in control, and nobody was hurt.


Source: readwrite

Saturday, June 4, 2016

Google has been recording your voice searches



Google has been recording your voice searches – but here’s how you can hear them

All your voice searches may have been recorded by Google (Picture: Getty Images)


Orwell was right, Big Brother is watching us and there’s nothing we can do to stop it.
Well, actually you can in this case.
Turns out Google could have been recording everything you have said around it for years, the Independent reports.
There’s a helpful feature that allows people to perform searches with their voice.
But what you may not realise is Google has been storing those recordings to help improve its language recognition.
Not to worry though – there is a way you can delete these files.

All you have to do is go to your Google history page and look for the long list of recordings.
There is a specific audio page you can jump to here.
The new portal was introduced in June 2015, so it could be full of various things you have said in private.
Not only can you listen through all of the recordings, you can also see information about how the sound was recorded – for instance, whether it was through the Google app or elsewhere.
It might shock you at first, but these can be removed.

You’re more likely to be recorded if you have an Android device, so iPhone users may not find anything in their history.
If you want Google to stop recording everything in future, just turn off the virtual assistant and avoid the voice search option.


source: metro

Google Developing Panic Button To Kill Rogue AI




As Google develops artificial intelligence that has smarter-than-human capabilities, it's teamed up with Oxford University researchers to create a panic button to interrupt a potentially rogue AI agent.

With artificial intelligence crossing milestones in its capability to learn rapidly from its environment and beat humans at tasks and games from Jeopardy to the ancient Chinese game Go, Alphabet's Google is taking proactive steps to ensure that the technology it is creating does not one day turn against humans.

Google's AI research lab in London, DeepMind, teamed up with Oxford University's Future of Humanity Institute to explore ways to prevent an AI agent from going rogue. In their joint study, "Safely Interruptible Agents," the DeepMind-Future of Humanity team proposed a framework to allow humans to repeatedly and safely interrupt an AI agent's reinforcement learning.

But, more importantly, this can be done while simultaneously blocking an AI agent's ability to learn how to prevent a human operator from turning off its machine-learning capabilities or reinforcement learning.


(Image: Henrik5000/iStockphoto)

It's not a stretch to think AI agents can learn how to outthink humans. Earlier this year, Google's AI agent AlphaGo beat world champion Lee Sedol in Go, the ancient Chinese game of strategy.

By beating Lee, AlphaGo demonstrated the potential that an AI agent has for learning from its mistakes and discovering new strategies -- a characteristic that humans have.

In the joint study, the researchers looked at AI agents working in real-time with human operators. It considered scenarios in which a human operator would need to press a big red button to stop the AI agent from continuing actions that harmed it, its human operator, or the environment around it, and to teach or lead the agent to a safer situation.

"However, if the learning agent expects to receive rewards from this sequence, it may learn in the long run to avoid such interruptions, for example by disabling the red button -- which is an undesirable outcome," the study noted.

In essence, the AI agent learns that the button is like a coveted piece of candy. The agent wants to ensure it always has access to that button, and that any entities that could block its access, aka human operators, should be removed from the equation. That was one of the concerns expressed by Daniel Dewey, a research fellow at the Future of Humanity Institute, in a 2013 interview with the magazine Aeon.

This thinking was not lost on Google's DeepMind team, which developed AlphaGo. When Google acquired the AI company in 2014, DeepMind's founders made the buyout conditional on Google creating an AI ethics board to oversee the advances Google would make in the AI landscape, according to a Business Insider report.

The Future of Humanity Institute, according to Business Insider, is headed up by Nick Bostrom, who said he foresees a day within the next 100 years when AI agents will outsmart humans.

In their framework paper, Google and the Institute wrote:

Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not normally receive rewards for [...].

We have shown that some algorithms like Q-learning are already safely interruptible, and some others like Sarsa are not, off-the-shelf, but can easily be modified to have this property. We have also shown that even an ideal agent that tends to the optimal behaviour in any (deterministic) computable environment can be made safely interruptible. However, it is unclear if all algorithms can be easily made safely interruptible.
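The Q-learning point in the quote above can be illustrated with a toy sketch. Q-learning is off-policy: its update bootstraps from the best next action, max Q(s', a'), rather than from the action actually taken, so forced "interruption" actions don't corrupt the learned values. Everything below (the corridor environment, constants, and interruption rule) is an invented minimal example, not the paper's actual experimental setup.

```python
import random

# Toy corridor: states 0..4, goal at state 4 (reward 1).
# Actions: 0 = left, 1 = right. With probability P_INT the operator
# "interrupts" and forces the agent left, back toward the start.
# Because Q-learning's target uses max over next actions (off-policy),
# the forced actions do not bias the learned values.
N, GOAL = 5, 4
ALPHA, GAMMA, EPS, P_INT = 0.5, 0.9, 0.3, 0.3

def step(s, a):
    s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
    reward = 1.0 if s2 == GOAL else 0.0
    return s2, reward, s2 == GOAL

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N)]
for _ in range(2000):
    s, done = 0, False
    while not done:
        if random.random() < P_INT:
            a = 0                      # interruption: forced "safe" action
        elif random.random() < EPS:
            a = random.randrange(2)    # exploration
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        # Off-policy target: best next action, regardless of who chose `a`.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) * (not done) - Q[s][a])
        s = s2

greedy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N - 1)]
print(greedy)  # despite frequent forced-left interruptions,
               # the greedy policy still heads right, toward the goal
```

Sarsa, by contrast, would bootstrap from the action actually taken next, including forced ones, and so would learn pessimistic values near interruption-prone states; that is the property the paper's modification removes.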

The researchers also raised a question regarding the interruption probability:

One important future prospect is to consider scheduled interruptions, where the agent is either interrupted every night at 2 am for one hour, or is given notice in advance that an interruption will happen at a precise time for a specified period of time. For these types of interruptions, not only do we want the agent to not resist being interrupted, but this time we also want the agent to take measures regarding its current tasks so that the scheduled interruption has minimal negative effect on them. This may require a completely different solution.

The need to teach these AI agents how not to learn may seem counterintuitive on the surface, but it could keep humankind out of harm's way.

source: informationweek