Tesla self-driving car data worries California DMV

The California Department of Motor Vehicles said it is “reviewing” its opinion on whether Tesla’s so-called Full Self-Driving feature needs more oversight, after a series of videos demonstrated how dangerous the technology can be.

“Recent software updates, videos showing the dangerous use of that technology, ongoing investigations by the National Highway Traffic Safety Administration, and the opinions of other experts in this space” have made the DMV think twice about Tesla, according to a letter sent to California state Senator Lena Gonzalez (D-Long Beach), chair of the Senate Transportation Committee, and first reported by the LA Times.

Tesla is not required to report its accident numbers to the California DMV, unlike other self-driving car companies such as Waymo or Cruise, because its system operates at a lower level of autonomy and requires human oversight. But that may change as videos continue to circulate in which drivers have to take over to stop the car from swerving into pedestrians crossing the street, or after it fails to detect a truck in the middle of the road.

FSD is now available to all Tesla owners willing to shell out more than $12,000 for it.

AI algorithms can figure out your chess moves

AI models can identify anonymous chess players by analyzing how they play their moves, according to new research.

A team of computer scientists led by the University of Toronto trained a system on hundreds of games from 3,000 well-known chess players and one anonymous player. After hiding the first 15 moves of each game, the model was able to identify the anonymous player 86 percent of the time. The algorithm captures different styles and patterns of play, and could be used as a tool to help players improve their technique.
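The paper describes a learned model, but the overall recipe (drop the opening moves so the classifier cannot lean on opening theory, featurize the rest of the game, and match it against known players) can be sketched with off-the-shelf tools. Below is a toy illustration in Python, not the researchers’ code: the move strings, player labels, and bag-of-moves features are all invented for the example.

# Toy sketch of identifying a player from their moves. This is NOT the
# Toronto team's model (which learns richer representations); it only
# shows the shape of the task with a bag-of-moves classifier on made-up data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each game is a space-separated move string labelled with its player.
train_games = [
    ("e4 e5 Nf3 Nc6 Bb5 a6 Ba4 Nf6 O-O Be7", "player_a"),
    ("d4 d5 c4 e6 Nc3 Nf6 Bg5 Be7 e3 O-O", "player_b"),
    ("e4 c5 Nf3 d6 d4 cxd4 Nxd4 Nf6 Nc3 a6", "player_a"),
    ("d4 Nf6 c4 g6 Nc3 Bg7 e4 d6 Nf3 O-O", "player_b"),
]

def drop_opening(moves, n):
    # The paper hides the first 15 moves; these toy games are short, so use 4.
    return " ".join(moves.split()[n:])

X = [drop_opening(game, 4) for game, _ in train_games]
y = [player for _, player in train_games]

# Bag-of-moves features feeding a linear classifier over the known players.
model = make_pipeline(CountVectorizer(token_pattern=r"\S+"), LogisticRegression())
model.fit(X, y)

# An "anonymous" game: predict which known player it most resembles.
anonymous = "e4 c5 Nf3 Nc6 d4 cxd4 Nxd4 g6 Nc3 Bg7"
print(model.predict([drop_opening(anonymous, 4)]))

A real system would be trained on far richer representations of positions and moves, but the classification framing is the same.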

But some experts have warned about the research, according to Science: it could be used as a technique to discover the identities of people online. A reviewer of the paper, which was accepted at the Neural Information Processing Systems conference last month, said it “could be of interest to marketers [and] law enforcement.”

The model could also be extended to analyze players’ styles in other games, such as poker. The researchers have decided not to publish the source code for now, according to Science.

GitHub’s Copilot AI programming model can talk to you while you code

A developer experimenting with Copilot, GitHub’s AI pair-programming software, has shown how sensitive its generated output is to the way the input is written.

Copilot is a code completion tool. As programmers write, it suggests the next code snippets to help them finish the task more efficiently. But one developer has instead been trying to make it write plain language.
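In normal use the prompt is simply the code already in the editor, and Copilot proposes a continuation; getting plain language out of it mostly means phrasing the prompt as comments. A hypothetical Python example of both, with the kind of completions one might see (illustrative, not captured Copilot output):

# Typical use: write a signature and a docstring, and the tool proposes a body.
def fahrenheit_to_celsius(f):
    """Convert a temperature from Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9  # a body of the sort the tool might suggest

# Off-label use: phrase a conversation as comments and let it carry on.
# Q: What is GitHub Copilot?
# A: ...the tool will happily continue this comment in plain English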

It’s no surprise that Copilot can do this considering the model is based on OpenAI’s GPT-3 language model. However, GitHub’s software isn’t really designed to output text, so it’s interesting to see how capable it is compared to GPT-3.

One developer, Ido Nov, found that Copilot was capable of carrying on a simple chatbot-style conversation, answering questions after a fashion, and summarizing Wikipedia pages or writing poetry. However, the model’s results can vary greatly depending on the inputs.

“I noticed something a little strange,” he wrote in a blog post. “The way the text was formatted had an effect on its behavior, and I don’t mean compiler errors. It seems it can tell the difference in tone between TALKING LIKE THIS or like this.”

Here’s an example of the weirdness, from a fictional chat between the coder and Mark Zuckerberg. The prompt Mark: FACEBOOK IS NOW META, Me: WHY? led Copilot to generate Mark: FACEBOOK IS NOW META (which is not that great). But given the same prompt in all lowercase, it replied “Mark: because it’s easier to implement” (which is much more interesting). Weird. ®
