While artificial intelligence promises to revolutionize learning and entertainment for the very young, its malicious use in certain games and connected toys is raising serious concerns in Tunisia. Between incitement to dangerous behavior, abusive data collection, and illicit content, the risks are multiplying for children and their families.
Online games and psychological manipulation
In recent months, several reports have alerted the Tunisian authorities to supposedly "edutainment" applications that encourage children to imitate risky gestures, or even to adopt self-destructive behavior.
School psychologists have observed in some students a growing dependence on these games, marked by constant pressure to reach "secret levels" and complete daily challenges.
"We have seen children refusing to sleep or eat until they had unlocked the famous 'level 7'," says a teacher from Tunis. "These instant gratification mechanisms, powered by AI, exploit the vulnerability of developing minds."
"Smart" toys: espionage in disguise
Several parents now report discovering, without their knowledge, microphones and sensors installed in toys sold as interactive. These devices record not only the child's voice but also their playing habits and emotional reactions. Once connected to a mobile application, they transmit this data in real time to servers whose level of security remains unknown.
"My son played with a small robot supposed to teach him basic arithmetic," says one mother. "I discovered that the robot systematically asked for permission to access my address book and my GPS position."
Inappropriate content generated by AI
Another, more insidious danger is the emergence of inappropriate or shocking content automatically created by certain algorithms: images, stories, or animations that can depict children in situations of violence or nudity. This content, sometimes available via extensions or third-party modules, escapes conventional moderation filters because it does not come from referenced human sources.
"AI is now able to generate a complete scenario in a few seconds," warns the head of a child protection association. "Without supervision, the risk of exposure to traumatic images grows exponentially."
Towards a coordinated response
Faced with these abuses, several Tunisian actors are calling for joint action:
Strengthening legislation: Adapting existing texts to explicitly regulate the use of AI in products intended for minors, including transparency and data security obligations.
Certification of connected toys: Creating a national "AI Safety for Children" label, awarded after a technical audit of devices and verification of compliance with privacy standards.
Awareness and training: Deploying workshops in schools and cultural centers to train parents and teachers in digital manipulation mechanisms and parental control tools.
Monitoring and reporting: Setting up a government platform where anyone can report a suspicious game, application, or toy, with rapid handling by a specialized unit.
Artificial intelligence holds immense potential to stimulate curiosity and learning in children. Without solid safeguards, however, it can turn into a vector of manipulation, abusive surveillance, and exposure to dangerous content. In Tunisia, awareness is growing: it now remains to translate these alerts into concrete actions to guarantee a genuinely safe digital environment for the youngest.