Latest news, announcements, and updates from the QWERKY team
September saw the team at Qwerky expand in a very exciting way: with our new server, stocked with 8 NVIDIA RTX Pro 6000 Blackwell Max-Q Workstation Edition cards!
For decades, researchers have chased the fantasy of superintelligent systems that are smarter, more capable, and perhaps even godlike. Does this mean we are on the precipice of domination by machines?
In our third (and final) part, we’ll turn our attention to three novel ways of combining these two approaches and see how they attempt to address these issues.
In part 2, we’ll take a closer look at some of the issues that arose, particularly those later addressed by approaches still in use today. As we’ll see, for all of the problems they solve, stochastic approaches bring unique challenges of their own.
This series will attempt to answer the question of how we got from deterministic to probabilistic approaches (part 1), trace how probabilistic ones led back to the problem of hallucinated math answers (part 2), and end by sketching some ways in which contemporary AI research has attempted to address these issues (part 3).
A recent study from the MIT Media Lab, "Your Brain on ChatGPT," offers a compelling empirical analysis of the cognitive consequences of using Large Language Models (LLMs) in academic writing. The research has implications for pedagogy and cognitive science, introducing the concept of "cognitive debt" to describe the neurological and performance-related costs of outsourcing intellectual labor to artificial intelligence. My analysis finds the study to be a solid contribution to the discourse on AI in education, though not without its issues.