If you’ve always wanted to program your Nvidia GPU to speed up machine learning, image processing, and other workloads, but find Nvidia’s CUDA too overwhelming or complicated to learn, you’re in luck.
OpenAI released Triton late last month, a Python-based environment that aims to help developers write and compile code to run on their Nvidia GPUs much more easily, without having to deal with CUDA directly.
The San Francisco upstart has been using Triton to optimize its software so that its machine-learning algorithms run more efficiently on specialized hardware. Building state-of-the-art models is expensive; developers must be able to train and tune them quickly, which often requires writing custom GPU kernels.
“We are releasing Triton 1.0, an open-source programming language similar to Python that allows researchers with no CUDA experience to write highly efficient GPU code, most of the time on par with what an expert could produce,” OpenAI said. “Triton allows you to achieve maximum hardware performance with relatively little effort; for example, it can be used to write FP16 matrix multiplication kernels that match the performance of cuBLAS, something many GPU programmers cannot do, in fewer than 25 lines of code.”
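Triton has developers write kernels at the level of blocks of data rather than individual GPU threads, with the compiler handling the low-level scheduling. As a rough, hedged sketch of that block-oriented programming model, here is a plain NumPy analogy (not Triton itself, and with no GPU involved) of how a vector-add kernel is expressed as a grid of block programs; the function and variable names are illustrative only:

```python
import numpy as np

BLOCK = 64  # number of elements each block program handles


def add_block(x, y, out, pid, n):
    """One 'program instance': handle the pid-th block of BLOCK elements,
    masking out-of-range offsets much as a Triton kernel masks loads/stores."""
    offsets = pid * BLOCK + np.arange(BLOCK)
    offs = offsets[offsets < n]  # guard the ragged final block
    out[offs] = x[offs] + y[offs]


def vector_add(x, y):
    n = x.size
    out = np.empty_like(x)
    # Launch one block program per BLOCK-sized chunk (the 'grid')
    for pid in range((n + BLOCK - 1) // BLOCK):
        add_block(x, y, out, pid, n)
    return out


x = np.arange(1000, dtype=np.float32)
y = 2 * x
result = vector_add(x, y)
```

In actual Triton, the block function is decorated with `@triton.jit` and uses masked `tl.load`/`tl.store` operations on pointers; the sketch above mirrors only the structure of the programming model, not its performance.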
You can read more about Triton, and find its documentation, here. Support for other GPUs, such as AMD’s, is said to be coming.
Computer Detected Bullet Evidence Withdrawn From Trial
Prosecutors in the United States have withdrawn from a murder trial evidence of what was claimed to be a gunshot detected by classification algorithms.
One night in May last year, Safarain Herring, 25, was shot in the head and died two days later in hospital. Michael Williams, 64, was charged with his murder, and denies wrongdoing: he said Herring was killed by someone else in a drive-by shooting. Williams is said to have taken Herring to St Bernard Hospital in Chicago.
Some US cities have a system built by ShotSpotter dotted through their streets; it consists of microphones connected to computer systems programmed to identify the sound of gunfire and automatically alert police to the location.
One piece of evidence against Williams was the claim that ShotSpotter sensors in Chicago identified gunshots where surveillance cameras had seen Williams stop his car on a block on the city’s South Side, just when and where police said Herring was shot.
However, Williams’ attorney submitted documentation [PDF] stating that ShotSpotter actually detected a firework a mile away from that location, and that ShotSpotter later reclassified the explosion as a gunshot and the location as the spot where Williams was seen on camera, Vice first reported.
Williams’ attorney demanded that the court conduct an investigation into the ShotSpotter evidence, and prosecutors simply withdrew it.
ShotSpotter responded by strenuously denying that it improperly tampered with any data or evidence, and pushed back on any suggestion it had done so to help police build a case. It said its software generates real-time alerts automatically, and staff then analyze the microphone recordings to produce forensic reports for the courts, so these final reports may differ from the initial alerts.
“The idea that ShotSpotter ‘tampers’ or ‘fabricates’ evidence in any way is an outrageous lie and would be a crime,” it said in a statement. “We follow the facts and data for our forensic analysis. Period.”
Apple Watch data trouble for a health study
The algorithms running on Apple Watches to monitor things like heart rate and sleep patterns may not be reliable enough for academic research.
JP Onnela, associate professor of biostatistics at Harvard University’s School of Public Health, discovered this the hard way when he asked collaborator Hassan Dawood, a researcher at Brigham and Women’s Hospital, to export the heart-rate data recorded by his Apple Watch.
To their surprise, when they exported the same samples taken during the same time period twice, they discovered a large discrepancy between the two sets of recordings. The same heart-rate readings, exported once on September 5, 2020 and again on April 15, 2021, should have been identical, but they differed.
Onnela suspects that Apple’s code could be the culprit. “These algorithms are what we would call black boxes; they are not transparent, so it’s impossible to know what’s in them,” he told The Verge. The lack of transparency means Apple may have quietly modified its software, making it difficult for researchers to trust the data collected by the iGiant’s devices.
Apple, however, said there was no problem with its algorithms and that the issue probably lies with the export process. Either way, the discrepancies suggest the devices are probably not a reliable data source for academic purposes.
You can read more about the experiment here. ®