Remember back in 2016 when everybody was sending those Mannequin Challenge videos? Well, it turns out that instead of collecting dust in Ye Olde Meme Archive, Google researchers are using the videos to help teach robots to better navigate their surroundings.
While humans are naturally able to look at a 2D video and understand it was filmed in a 3D space, robots aren't so good at that yet. That's part of the reason why robots struggle to autonomously navigate new areas, and it's also a challenge when it comes to building self-driving cars.
Turns out, the Mannequin Challenge presented the perfect data set for teaching robots how to perceive depth in a 2D image. If you happened to live under a rock in 2016, the challenge involved a group of people freezing in place, often in active poses, while the person recording moved around capturing the shot from multiple angles.

Of the countless videos uploaded to YouTube, the researchers selected 2,000 of them. They then filtered the clips to remove those unsuitable for training: clips where someone, say, unfroze, used fisheye lenses, or had synthetic backgrounds that could lead to borked results. The final data set was then used to train a neural network that could predict the depth of a moving object in a video. According to the paper's conclusion, accuracy was much higher using this method than previous state-of-the-art methods.
There are some limitations, however. The researchers noted that their method may not be quite so accurate when it comes to cars and shadows. However, they did make their data set public. So, how do you know if your particular Mannequin Challenge video was used in the set? Short answer is: You don't.

According to MIT Technology Review, which initially reported on the study, AI researchers commonly scrape publicly available images to train bots. And the more advanced the models researchers use, the more data they need to train the neural networks. So if you upload a video to YouTube, and an AI researcher happens to think it will help teach a neural network how to better navigate, well, you uploaded your video and made it publicly available.
Microsoft recently deleted its MS Celeb database of 100,000 faces from the internet. Though it was supposedly public figures only, it was found that faces of private individuals also made their way into the set. Plus, while the set was intended to be used for academic purposes only, the MS Celeb set has been used by private companies, including those in China working on facial recognition surveillance.
Sure, that can be a bit unsettling, but don't let that stop you from sharing your videos and #livingyourbestlife. Just keep in mind there's a chance that maybe Instagramming your pizza could also be teaching a machine how to manipulate.
