In Brief
Rosalyn unveiled StableSight, an AI system designed to combat various forms of cheating and generative AI use during online exams.
AI-proctoring platform Rosalyn launched StableSight, an AI system that aims to counter academic dishonesty, including the use of generative AI and cheating rings in online exams. According to the company, StableSight is adept at identifying deceptive practices even when individuals use separate devices to execute cheating strategies during examinations.
“The advent of generative AI has further unlocked opportunities for cheating, potentially triggering a virality akin to a network effect. StableSight by Rosalyn has been meticulously crafted in response to the evolving landscape of online education and examination,” Noor Akbari, CEO of Rosalyn told Metaverse Post. “It is engineered to counteract not only the traditional forms of cheating but also the complex challenges posed by generative AI and organized cheating syndicates.”
StableSight comprises multiple features to counter sophisticated cheating methods. The system’s gaze-tracking model detects the use of secondary screens, a common cheating technique, while the Keyboard Correlation Model adds a layer of security by predicting typed text from keyboard sound analysis.
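Rosalyn has not published how the Keyboard Correlation Model works internally, but the general idea of inferring typed text from keystroke audio can be sketched as a nearest-neighbor match between observed acoustic features and per-key "fingerprints." The fingerprint values, feature dimensions and helper names below are illustrative assumptions, not Rosalyn's actual implementation:

```python
import math

# Hypothetical per-key acoustic fingerprints; in a real system these
# would be learned from spectrogram features of recorded keystrokes.
FINGERPRINTS = {
    "c": [0.9, 0.1, 0.3],
    "a": [0.2, 0.8, 0.5],
    "t": [0.4, 0.3, 0.9],
}

def classify_keystroke(features):
    """Return the key whose fingerprint is closest (Euclidean distance)
    to the observed feature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(FINGERPRINTS, key=lambda k: dist(FINGERPRINTS[k], features))

def predict_text(keystroke_features):
    """Reconstruct typed text by classifying each keystroke in order."""
    return "".join(classify_keystroke(f) for f in keystroke_features)

# Simulated noisy recordings of the keys c, a, t
recorded = [[0.85, 0.15, 0.35], [0.25, 0.75, 0.45], [0.35, 0.3, 0.95]]
print(predict_text(recorded))  # -> "cat"
```

Matching predicted text against what appears on screen is one plausible way such a model could flag answers typed on a concealed second device.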
Notably, Rosalyn’s approach involves escalating suspected cases of cheating to human reviewers, prioritizing fairness and accuracy over automated conclusions to safeguard the rights of test-takers.
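The escalation flow the company describes can be sketched roughly as follows. The threshold, signal names and scores are illustrative assumptions; the key property, matching Rosalyn's stated approach, is that the AI routes suspicious sessions to a human reviewer rather than issuing a cheating verdict on its own:

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.7  # assumed value, for illustration only

@dataclass
class Signal:
    name: str
    score: float  # model confidence that this signal indicates cheating

def triage(signals):
    """Escalate a session to a human reviewer only when some model signal
    is strong; the AI alone never produces a final cheating flag."""
    suspicious = [s.name for s in signals if s.score >= REVIEW_THRESHOLD]
    if suspicious:
        return ("human_review", suspicious)
    return ("clear", [])

session = [Signal("gaze_offscreen", 0.82), Signal("keyboard_audio", 0.4)]
print(triage(session))  # -> ('human_review', ['gaze_offscreen'])
```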
To showcase its efficacy, Rosalyn has introduced an invite-only gamified portal that lets individuals pit their cheating-detection skills against StableSight. The portal offers 20 recorded exam sessions for participants to review alongside the AI proctoring system.
The company asserts that its AI proctoring services have been adopted by noteworthy organizations such as the U.S. Department of Defense, Coursera, Stripe, Nexford University, Dominican University and Missouri Baptist University.
AI’s Vital Role in Combating Online Exam Cheating
Demand for professional certifications in fields like project management, data science, DevOps, IT service and cybersecurity has surged. Statistics from the U.S. Bureau of Labor Statistics indicate a significant wage disparity, with credentialed workers earning 32% more than their non-credentialed counterparts.
Rosalyn believes this wage gap has contributed to a growing incentive for individuals to devise ever more inventive cheating tactics, heightening the need for stringent anti-cheating measures in online testing environments.
“The use of AI not only enhances security by continuously updating and learning to detect new cheating methods, but it also maintains the integrity and credibility of certifications and degrees by ensuring strict adherence to exam protocols. StableSight’s AI-driven approach, with its deep learning algorithms and pattern recognition, is designed to provide a fair testing environment for all candidates,” Rosalyn’s Akbari told Metaverse Post.
The platform’s proactive approach aligns with a significant rise in reported incidents of cheating, especially in online testing scenarios.
A 2022 academic study highlighted that 35% of undergraduate business students admitted to cheating during online tests amid the COVID-19 pandemic. Additionally, collaborative research conducted by Rosalyn and a language testing partner revealed that StableSight identified 120 out of 1,500 test takers engaged in cheating rings.
“The growing demand for remote proctoring has, unfortunately, led to an increase in innovative cheating methods. As more exams are conducted online, students and candidates often use sophisticated techniques to bypass the proctoring system when taking an exam,” explained Akbari. “These methods range from using multiple monitors and screen-sharing software to utilizing generative AI models to answer questions.”
However, imbalanced weighting of certain signals during the exam process, combined with the risk of bias in AI models, can lead AI proctoring platforms to generate false flags. Left unchecked, an AI model devoid of human decision-making can actually make the experience for examinees much worse.
At the same time, relying solely on traditional human proctors increasingly means overlooking subtle indications of online cheating, including those facilitated by technologies like generative AI.
To tackle this challenge, Rosalyn said that StableSight combines human and artificial intelligence for greater accuracy in detecting online cheating.
“By integrating human proctoring into our proprietary system, we add an extra layer of legitimacy that helps StableSight distinguish true violations from false flags. This ensures that intervention in a test-taking scenario is only implemented when absolutely necessary, and that intervention is absolutely warranted,” Rosalyn’s Akbari told Metaverse Post.
Beyond flagging behavior in real time, the Keyboard Correlation Model’s audio-based text prediction helps thwart attempts to use concealed devices for dishonest purposes, and Rosalyn’s AI escalates suspected cases to human reviewers before anyone is formally flagged for cheating.
“We understand that students may feel anxious about being monitored by AI, so we’re working on making our systems more transparent and user-friendly, reducing any perceived invasiveness while maintaining effectiveness,” added Akbari. “Ultimately, our goal is to create a balanced ecosystem where technology enhances the online testing experience, upholds the integrity of online credentials and respects student privacy and comfort.”
Victor is a Managing Tech Editor/Writer at Metaverse Post and covers artificial intelligence, crypto, data science, metaverse and cybersecurity within the enterprise realm. He has half a decade of media and AI experience at well-known outlets such as VentureBeat, DatatechVibe and Analytics India Magazine. A Media Mentor at prestigious universities including Oxford and USC, with a Master’s degree in data science and analytics, Victor is deeply committed to staying abreast of emerging trends.
He offers readers the latest and most insightful narratives from the Tech and Web3 landscape.
Victor Dey