Children's safety and wellbeing have become paramount concerns as artificial intelligence (AI) transforms digital environments. AI-based technologies bring improved learning experiences and security features to social media, entertainment, and education. However, these developments also carry risks, including algorithmic bias, privacy breaches, exposure to harmful content, deepfakes, and cyber threats. Owing to ethical neglect and weak regulation, AI systems can endanger children's wellbeing, exposing them to psychological trauma and digital exploitation. This paper aims to evaluate current AI-based child safety systems, identify moral and legal loopholes, and propose methods for the responsible use of AI that prioritize children's safety. A qualitative research methodology integrating case studies and policy analysis is adopted; secondary data from AI ethics frameworks and child protection laws are examined to assess AI's impact on children's cyber safety. The findings reveal that although AI improves child safety through parental controls, threat identification, and content moderation, it also heightens dangers because of a lack of transparency, biased algorithms, and the abuse of AI for child-targeted cybercrimes. The study emphasizes the urgent need for stricter AI laws and ethical AI design to make the internet a safer place for children.