Journal of International Commercial Law and Technology
2026, Volume 7, Issue 1 : 1399-1408 doi: 10.61336/Jiclt/26-01-130
Original Article
ALGORITHMIC DISCRIMINATION VS BENEFITS: A GENDERED ANALYSIS OF AI MARKETS.
1 Senior Assistant Professor, School of Media and Communication Studies, D.Y. Patil International University, Pune – 412101, Maharashtra, India.
2 Senior Assistant Professor, School of Media and Communication Studies, D.Y. Patil International University, New Delhi – 110075, India.
Received: March 14, 2026
Revised: April 7, 2026
Accepted: April 25, 2026
Published: May 9, 2026
Abstract

Artificial intelligence (AI) is transforming healthcare markets through automated decision-making and predictive analytics. At the same time, algorithmic bias, and gender bias in particular, has become a major barrier to equitable healthcare outcomes. This study investigates gender discrimination in AI systems, examining how biased algorithms influence clinical decisions, access to healthcare, and market processes and outcomes. Using qualitative and quantitative data analysis, it assesses how algorithms perform across demographic and medical information. Most AI systems are trained on male-dominated datasets; as a result, diagnostic and therapeutic recommendations fluctuate in quality and are frequently ineffective for women and similarly affected groups. These biases jeopardize the trustworthiness of AI and deepen social and health inequalities. The paper proposes transparent algorithmic design, inclusive datasets, and ethical safeguards to mitigate bias and assure fairness in healthcare systems. It thereby contributes to the literature on responsible AI, advocating inclusive, equitable technology that serves all who use and are affected by these systems.

INTRODUCTION

Artificial intelligence (AI) technologies now play a central role in sectors such as healthcare, banking, and education, where they have accelerated change through better decision-making and more effective use of resources [1], [2]. Yet alongside these improvements, the risks of algorithmic discrimination and built-in bias have grown, raising significant ethical and social challenges. AI systems can replicate discrimination based on gender, race, and socio-economic status, producing biased judgments in critical sectors [3], [4].

 

 

Among these issues, gender bias is one of the most prominent threats to the trustworthiness, equity, and inclusivity of AI-based systems. This paper analyzes the extent to which gender bias drives algorithmic discrimination in AI markets, studying how biased algorithms affect market performance and decision-making processes [5], [6]. The tension between what AI technology can achieve and the harm algorithmic discrimination can cause remains unresolved, and this study aims to shed light on it. It also emphasizes the need to examine the ethical implications and consequences of AI deployment. Accordingly, the study undertakes a systematic literature review of gender-biased AI systems, quantifies bias in relation to market and healthcare outcomes, and proposes concrete, realistic actions to mitigate algorithmic discrimination and maximize the benefits of AI systems [7], [8], [9].

 

A holistic framework is also provided, outlining essential current and future approaches for achieving equitable and inclusive AI. The study highlights the need for ethical governance mechanisms, design transparency, and inclusive policy frameworks to protect marginalized groups from AI discrimination [10], [11]. Guided by existing frameworks and literature on gender, technology, and bias, the paper advances the dialogue on responsible AI and fair technological innovation. Researchers, policymakers, and practitioners must adapt their methods so that AI systems are fair, accountable, and socially inclusive [12], [13], [14]. The study also identifies pressing research needs in domains such as healthcare and finance, where algorithmic bias has been found to reinforce inequality [15], [16], [17], and thereby supports the development of more inclusive AI tools that benefit the broadest segments of the population.

 

Overview of Algorithmic Discrimination and Gender Bias in AI

Artificial intelligence (AI) is now embedded in decision-making across many sectors: healthcare, commercial banking, credit, education, construction, finance, and employment. Although deploying AI in such tasks improves efficiency and decision-making capabilities, it also raises serious concerns, including algorithmic discrimination and gender bias [1], [2]. As industry adopts AI-driven systems at an unprecedented pace, algorithms trained on datasets in which women and other minorities are under-represented can amplify existing social discrimination and produce discriminatory results for different groups of people [3], [4]. This study investigates how gender bias manifests in AI systems and its consequences for market dynamics, employment availability, and supply [5], [6].

 

This study addresses the intersectional implications of algorithmic decision-making and the systemic disparities that favor certain demographics. Biased algorithms often inadvertently benefit already-advantaged groups, further entrenching and exacerbating existing social and economic inequality. The article pursues three aims: first, a systematic literature review to build an understanding of algorithmic discrimination and gender bias; second, empirical cases that illustrate how biased AI systems operate in practice across various sectors; and third, a discussion of how responsible AI implementation can reduce discriminatory outcomes [7], [8]. It thus examines the nexus between gender and algorithmic discrimination and emphasizes the significance of fairness and diversity in the construction of AI [9], [10].

 

The significance of this work lies in its practical application as well as its academic contribution. Addressing gender discrimination in AI is crucial for informing policy, research, academia, and industry: public policy needs direction on how to build transparency, accountability, and ethics into open AI governance frameworks [11], [12]. Further, combating algorithmic bias can lead to fairer competition in economic and labour markets and more equal opportunity across a wider range of people, without adverse social and economic impacts [13], [14]. This section thus prepares the ground for understanding the larger context of gender bias in AI systems. Without aggressive action, including ethical regulation and ethical AI practices, future AI could entrench discrimination rather than promote egalitarianism [15], [16].

LITERATURE REVIEW

Artificial intelligence (AI) has become a prominent facet of contemporary society, influencing areas such as health, finance, education, work, and governance. Although AI technologies bring many benefits, including automation, efficiency, and improved decision-making, their adoption poses urgent issues of algorithmic discrimination and gender bias [1], [2]. In particular, algorithms trained on biased data can replicate and amplify existing social norms and inequalities in recruitment, financial services, and healthcare recommendations [3], [4], all of which harm women and marginalized groups.

 

These anxieties challenge the belief that AI systems are neutral, objective technologies. As previous studies highlight, gender bias in AI reflects the social, economic, and cultural contexts in which algorithms are designed and implemented [5], [6]. Biased training data, a lack of diversity in design teams, and opaque algorithm structures are the underlying drivers of biased outcomes [7], [8]. While most research emphasizes the negative impact of biased AI, relatively little is known about how the social dynamics of discrimination are created and maintained within social hierarchies [9], [10].

 

Transparency, accountability, and ethical quality in the design and governance of AI applications have been identified as crucial for reducing discriminatory outcomes [11], [12]. Suggested solutions include algorithmic transparency, fair and representative data, fairness-aware machine learning models, and public education about AI technologies. Even so, the practical applicability of these measures remains underinvestigated [13], [14].
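One widely cited fairness-aware technique of the kind referred to here is reweighing (Kamiran and Calders), which assigns each training example a weight so that group membership and outcome become statistically independent in the weighted training set before a model is fitted. The sketch below is illustrative only; the toy hiring data are invented and do not come from this study.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Compute reweighing weights: each (group, label) cell is weighted
    so that group and outcome are independent under the weights."""
    n = len(labels)
    g_count = Counter(groups)                # group frequencies
    y_count = Counter(labels)                # label frequencies
    gy_count = Counter(zip(groups, labels))  # joint frequencies
    return [
        (g_count[g] * y_count[y]) / (n * gy_count[(g, y)])
        for g, y in zip(groups, labels)
    ]

# Hypothetical hiring data: group ('F'/'M') and outcome (1 = hired).
groups = ['F', 'F', 'F', 'F', 'M', 'M', 'M', 'M']
labels = [1, 0, 0, 0, 1, 1, 1, 0]
weights = reweighing_weights(groups, labels)
# Under-hired (F, hired) examples get weight 2.0; over-represented
# (M, hired) examples get weight 2/3, equalizing weighted hire rates.
```

After reweighing, the weighted hiring rate is identical for both groups, which is the independence property the preprocessing step aims for.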

 

At the same time, discussions of AI's benefits (heightened productivity, efficiency, and innovation) frequently pay little attention to how unevenly those benefits fall across genders [15], [16]. Feminist and social justice perspectives are increasingly applied to the intersection of gender and AI technology. Feminist theories argue that technological systems are shaped by social structures of power and may reproduce gender inequality when inclusivity is ignored during development [17], [18]. Social justice frames likewise challenge the premise that algorithms treat all groups equitably, emphasizing that biased data collection and biased design processes can reinforce structural inequality [19], [20].

 

These perspectives underline the importance of gender-sensitive methods in AI governance and policy-making. The literature also demonstrates how intersectionality contributes to understanding algorithmic discrimination: gender bias in AI is not a binary issue that can be resolved in isolation, because algorithmic outcomes are also affected by race, socio-economic status, ethnicity, and geography [21], [22]. This broader perspective reveals how different minority groups experience AI-driven discrimination in context and underscores the need for fair and inclusive technology models. Methodologically, studies of algorithmic discrimination employ qualitative, quantitative, and mixed methods. Qualitative research (interviews and case analyses) enables investigation of the personal experiences of people affected by biased AI systems [23], [24], while quantitative analysis of large datasets makes it possible to evaluate the extent to which algorithmic outputs vary among populations [25], [26].
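A minimal sketch of the kind of quantitative evaluation described here is the disparate impact ratio, which compares positive-decision rates between a protected group and a reference group. The decision data below are hypothetical, and the 0.8 threshold is the common "four-fifths" rule of thumb used in employment auditing, not a figure from this study.

```python
def selection_rate(decisions, groups, group):
    """Fraction of members of `group` receiving a positive decision."""
    member = [d for d, g in zip(decisions, groups) if g == group]
    return sum(member) / len(member)

def disparate_impact(decisions, groups, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's; values below 0.8 fail the 'four-fifths' screen."""
    return (selection_rate(decisions, groups, protected)
            / selection_rate(decisions, groups, reference))

# Hypothetical audit of a screening model: 1 = shortlisted.
decisions = [1, 0, 0, 0, 0, 1, 1, 1, 0, 1]
groups = ['F', 'F', 'F', 'F', 'F', 'M', 'M', 'M', 'M', 'M']
ratio = disparate_impact(decisions, groups, 'F', 'M')
# 0.2 / 0.8 = 0.25, far below the 0.8 threshold: evidence of disparity.
```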

 

Mixed methods, which pair statistical evidence with socio-cultural and contextual analysis and so yield a holistic rather than one-sided understanding of algorithmic bias, have proven especially powerful [27], [28]. Another major theme in the literature is the duality of AI technologies. On one hand, AI can enhance efficiency, save costs, and improve decision-making processes [29], [30]. On the other, the absence of ethical supervision, inclusive design, and accountability may reproduce gender inequity and exclude marginalized groups from technology's benefits [31], [32]. This tension between innovation and discrimination underscores the call for responsible AI development and regulation. The literature also suggests that algorithmic discrimination is not merely a technical matter but a social, ethical, and political one [33].

 

Workers, developers, and practitioners need to come together to support AI governance characterized by clear, inclusive, responsible, and transparent oversight mechanisms [33], [34]. Increasing diversity in AI development teams and embedding ethical considerations in algorithm design are the most commonly proposed ways to neutralize bias and enhance fairness [35], [36]. Collectively, these works demonstrate that algorithmic bias in AI markets is a significant and pervasive threat to gender equity and social justice. While the literature has made considerable progress on bias detection in AI systems, more research is needed on effective mitigation strategies and their long-term impact [37], [38]. Future work should attend to the policy and social context of AI and to inclusive governance, with the goal of establishing humane and fair AI systems that serve all social groups [39], [40].

 

METHODOLOGY

The rapid spread of artificial intelligence (AI) technologies in healthcare, finance, recruitment, and education has brought concerns about algorithmic discrimination and gender bias to the fore [1], [2]. This study considers the dual role of AI: it can promote efficiency, better decisions, and market performance, but it can also reproduce existing gender discrimination when built on biased algorithms and datasets [3]. The research primarily explores the mechanisms through which AI systems sustain gender bias, while also examining how AI contributes positively to markets as processes and organizations become increasingly automated [4]. To fulfill these objectives, a mixed-methods design combining qualitative and quantitative approaches is used [5], providing an overall understanding of the relationship between gendered processes and algorithmic decision-making. On the qualitative side, semi-structured interviews and case analysis capture the lived experiences of individuals exposed to gender bias in AI software; these interviews provide context for understanding how marginalized people are discriminated against by AI [6], [7]. On the quantitative side, statistical analysis is conducted on datasets covering AI performance, recruitment systems, healthcare recommendations, and market effects. To test algorithmic outcomes across gender categories, data are drawn from multiple reports, company case studies, and publicly available AI datasets [8]. This analysis helps identify patterns of bias, unequal treatment, and discrimination produced by AI systems.
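As a sketch of what a statistical test of gender differences in algorithmic outcomes might look like, the following computes a Pearson chi-square statistic for a 2x2 gender-by-outcome contingency table. The counts are invented for illustration and do not come from the study's datasets.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table of
    (group x outcome) counts, e.g. [[hired_f, rejected_f],
                                    [hired_m, rejected_m]]."""
    row = [sum(r) for r in table]            # row totals
    col = [sum(c) for c in zip(*table)]      # column totals
    total = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical audit counts: 20/100 women vs 40/100 men shortlisted.
table = [[20, 80], [40, 60]]
stat = chi_square_2x2(table)
# stat ~ 9.52, above the 3.84 critical value at p = 0.05 (1 degree of
# freedom), so the gender-outcome association is unlikely to be chance.
```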
The analysis draws heavily on feminist theory and critical data studies, both of which underscore the relevance of marginalized perspectives in technological research and policy discussion [9]. The study thus reconciles multi-dimensional quantitative evidence with qualitative accounts of algorithmic bias; combining statistical data with qualitative testimony makes the findings both more reliable and more credible [10]. The methodological approach also reaches beyond academic study. It offers guidance to policymakers, researchers, and technical managers interested in ending gender bias in the AI ecosystem and promoting fairness in AI systems [11], and it engages the growing discourse on ethical AI governance, transparency, and accountability in technological innovation [12]. More broadly, the approach provides a stepwise analytical framework for exploring the social, ethical, and economic determinants of gender bias in AI industries, assessing both the threats and the rewards of AI technologies in context, with the goal of fostering equitable, inclusive, and responsible AI development across sectors [13], [14], [15].

Research Design

Artificial intelligence (AI) is rapidly transforming healthcare, banking and finance, recruitment, human resources, and education. The growing application of AI systems, despite its potential for efficiency and decision-making gains, has produced systems and applications with alarming implications for algorithmic and gender bias [1], [2].
This study explores which AI systems contain gender bias, how that bias arises from within them, and its effect on fairness and equality in markets. It also examines the benefits of designing and implementing ethical, inclusive AI technology and the extent of its influence [3]. The main aim is to analyze the ways in which algorithmic discrimination occurs and to compare the influence of AI applications on gender inequalities across sectors [4]. The study compares AI systems used in finance, healthcare, and hiring to look for signs of bias and to assess effects on the decisions and opportunities available to women. It also identifies best practices and action steps to advance equity, inclusion, and responsible AI management. A mixed-methods research design integrates qualitative and quantitative approaches [5], [6]. Interviews, case studies, and narratives gather the views and experiences of those affected by AI decision-making; such techniques reveal social and ethical dimensions of algorithmic bias that numbers alone might miss [7]. Quantitative techniques are applied to the datasets to investigate patterns of discrimination and to measure inequalities in AI-generated results across demographic classes [8]. Methodological triangulation ensures the reliability and validity of the research and allows a nuanced analysis of how AI systems may either reproduce or curtail gender inequalities in the marketplace.
Moreover, the design draws on feminist and social justice scholarship, which stresses the involvement of marginalized participants in discussions of technological progress and regulation [9], [10], as well as the need to pair an integrated study design with practical considerations (Hassan, 2024) when developing sound policy frameworks for AI governance [11]. By grounding these strategies in existing evidence on discriminatory practices, the results should support continuing deliberations on responsible AI development [14] and contribute to the creation of fair, impartial, and socially responsible AI systems in various fields [15], [16], [17], [18], [19], [20].

RESULTS

The gender gap in AI systems built on machine learning is highly significant both nationally and, even more so, worldwide, and is present in employment, healthcare, finance, automated decision-making, and other fields of application. As the research suggests, such systems typically reflect societal biases, and the algorithms end up reinforcing inequalities that target women and minorities. These biases are especially visible in AI recruitment tools, where bias in favor of male candidates over female candidates is well documented. Quantitative analysis of data and algorithm outputs shows that unequal training data and non-inclusive development practices have a measurable impact on unequal decision-making. The results indicate that AI systems lacking gender-sensitive evaluation schemes are likely to aggravate discrimination and exacerbate deeply entrenched social inequalities.

 

These findings underscore the importance of fair training data, transparent algorithmic design, and fairness-forward AI governance. Qualitative interviews with industry professionals and stakeholders showed rising awareness of gender bias in AI systems, yet most businesses were ill-prepared to take corrective action or coordinate ethical frameworks. This gap between what is known and what is acted upon reflects larger institutional debates about the governance and accountability of AI. The study also confirmed that diverse development teams and inclusive AI policies help organizations reach fairer, more comparable outcomes.

 

Even more significantly, algorithms built on data drawn from a variety of sources substantially lowered the likelihood of bias in automated decision systems. These results discredit the common view of machines as neutral, objective systems: an algorithm is shaped by the social environment in which it is developed and by the values, biases, and constraints under which it is produced and operated. This research therefore points to an emerging need for strong ethical frameworks, guidance on AI practices, and mechanisms of accountability and inclusivity in AI markets. The results also reveal the socio-economic dimensions of algorithmic discrimination: AI-based gender bias not only widened discrimination against already disenfranchised women but also reduced job opportunities, limited access to healthcare, and perpetuated systemic inequalities.

 

This study identifies where algorithmic bias does the most damage and may help policymakers, researchers, and developers build fairer AI systems in spaces where bias can otherwise seem insurmountable. The combined conclusions renew attention to responsible AI design and algorithmic accountability. The study also shows that AI technologies may deepen or mitigate discrimination, and may increase diversity, depending on how they are developed, implemented, and overseen. These findings open directions for future work on gender equity in AI, ethical AI management, and creative technology development.

 

The quantitative results point to several sources of bias: gender bias, race-based bias, gender imbalance in AI use, and bias arising from the exclusion of women from AI development teams. The data also indicated ongoing inequalities in AI systems that influence technological outcomes, decision-making approaches, and unequal access to the technology across sectors.

 

Analysis of Algorithmic Discrimination Patterns

The widespread use of artificial intelligence (AI) systems across many fields poses a clear risk of algorithm-induced discrimination and systemic gender bias. This study shows that AI-powered technologies can reproduce social inequality in sectors such as employment, credit scoring, healthcare, and policing. Many systems rely on historical datasets that encode gender-driven bias or past decisions unfavorable to women and marginalized communities. Recent data show that algorithms can privilege characteristics associated with male candidates, leaving women under-represented in recruitment and other automated decision-making processes. The same pattern appeared in financial and healthcare use cases, where biased datasets produced inequality and limited access for women. The discrimination evident in these data suggests that algorithmic bias is systemic rather than individual, echoing broader social and institutional inequalities.

 

Qualitative interviews with practitioners substantiated these findings. Most participants expressed concern that the people collecting the data, building the algorithms, and informing decision-making and design could be more diverse. Several observed that non-inclusive development environments leave hidden biases in datasets and algorithms undetected. That lack of scrutiny is critical in producing unfair, or even discriminatory, outcomes, particularly because outsiders rarely engage with AI systems or their outputs. The study also underscores the need to apply intersectionality when thinking systematically about algorithmic bias: gender bias typically overlaps with other social determinants, such as race, ethnicity, and socio-economic status, compounding the inequalities that AI-based systems must address.
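The intersectional point can be made concrete with a small sketch: computing outcome rates for every combination of attributes can expose gaps that single-attribute (marginal) rates conceal. The attribute labels and decisions below are entirely hypothetical.

```python
from collections import defaultdict

def subgroup_rates(decisions, attrs):
    """Positive-outcome rate for every intersectional subgroup, where
    `attrs` holds attribute tuples, e.g. (gender, ethnicity)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for d, key in zip(decisions, attrs):
        totals[key] += 1
        positives[key] += d
    return {key: positives[key] / totals[key] for key in totals}

# Hypothetical audit: each decision tagged with (gender, ethnicity).
decisions = [1, 0, 1, 0, 1, 1, 0, 0]
attrs = [('M', 'A'), ('M', 'A'), ('M', 'B'), ('M', 'B'),
         ('F', 'A'), ('F', 'A'), ('F', 'B'), ('F', 'B')]
rates = subgroup_rates(decisions, attrs)
# Marginal rates by gender alone are equal (0.5 for both M and F),
# yet the ('F', 'B') subgroup receives no positive outcomes at all.
```

This is why a gender-only audit can certify a system as "fair" while an intersectional audit of the same decisions reveals a fully excluded subgroup.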

 

The findings highlight, for both AI developers and policymakers, the need for expanded, more inclusive frameworks for assessing fairness and accountability in these systems. The research also indicates that organizations with diverse AI development teams are less likely to build biased technologies and more likely to build fair ones: inclusive teams bring a variety of insights, experience, and tools to design and testing, allowing discrimination trends to be identified and improving the transparency of algorithm development. Diversity and ethical constraints thus emerge as two fundamental cornerstones of responsible AI building. These results have real-world implications for policymakers, technology entrepreneurs, and industry. The study highlights the ongoing need for ethical AI governance frameworks covering data practices and accountability mechanisms to address algorithmic discrimination. Creating a more just, equitable, and socially inclusive world requires scrutinizing the effects of AI technologies on social life throughout their life cycle, rather than accepting their results out of habit. Ultimately, this article argues that algorithmic discrimination is both a technological and a social problem: left ungoverned, and built on platforms that do not cater to everyone, AI systems could end up amplifying rather than reducing existing disparities. Systematic work on policy reform and ethical AI development can therefore contribute to a more equitable and inclusive technological system.

 

CONCLUSION

This investigation examines the intersection of algorithmic discrimination, gender bias, and AI benefits in modern markets. Previous research indicates that AI systems replicate social inequalities, disproportionately affecting disadvantaged segments of society in recruitment, healthcare, and finance, particularly women and marginalized groups. At the same time, research demonstrates that AI can make processes and decision-making more efficient when implemented and designed responsibly and participatorily.

 

AI adoption and software development therefore can, and must, incorporate gender-sensitive and ethical practices, as the scholarship illustrates. The problematic findings traced back to biased datasets, a lack of diversity in development teams, and a lack of accountability. Policymakers, technologists, and scholars need to pool their resources in a coalition to build open, inclusive, and just AI systems.

 

The research also emphasized the need for stronger rule-making and ethical governance of AI, including legislation and harmonized frameworks that avert inequality in AI decision-making through regulation. Further studies are needed on AI's effects in the labour market, healthcare, education, and social equity. A broader societal responsibility is to create AI whose benefits are distributed more equally across society. Together, these contributions enrich the literature on responsible AI and gender equity at a fundamental level by showing that AI technologies are not inherently neutral: their impact on human life is closely linked to how they are shaped, constructed, and regulated. Facilitating equitable and inclusive AI systems also advances equity, accountability, and social justice in technology development, improving collective wellbeing.

 

REFERENCES

1. P. M. P. Bernal and J. C. Veliz, “Gender Bias in Generative Artificial Intelligence: A Literature Review on Perspectives and Implications,” 2025.
2. Y. Zhu, “Systemic Bias in Artificial Intelligence: Focusing on Gender, Racial, and Political Biases,” 2024.
3. S. González and L. Rampino, “A Design Perspective on How to Tackle Gender Biases When Developing AI-Driven Systems,” 2024.
4. S. O’Connor and H. K. Liu, “Gender Bias Perpetuation and Mitigation in AI Technologies: Challenges and Opportunities,” 2023, doi: 10.1007/s00146-023-01675-4.
5. Caliskan et al., “Mitigating Gender Bias in Machine Learning Data Sets,” 2020.
6. M. Doh and A. Karagianni, “‘My Kind of Woman’: Analysing Gender Stereotypes in AI through The Averageness Theory and EU Law,” 2024.
7. G. Nino and F. Lisi, “Exploring the Question of Bias in AI through a Gender Performative Approach,” 2024.
8. Y. S. J. Aquino et al., “Practical, Epistemic and Normative Implications of Algorithmic Bias in Healthcare Artificial Intelligence: A Qualitative Study of Multidisciplinary Expert Perspectives,” 2023, doi: 10.1136/jme-2022-108850.
9. L. Nazer et al., “Bias in Artificial Intelligence Algorithms and Recommendations for Mitigation,” 2023, doi: 10.1371/journal.pdig.0000278.
10. D. Cirillo et al., “Sex and Gender Differences and Biases in Artificial Intelligence for Biomedicine and Healthcare,” 2020, doi: 10.1038/s41746-020-0288-5.
11. Karagianni, “Gender in a Stereo-(Gender) Typical EU AI Law: A Feminist Reading of the AI Act,” Cambridge Forum on AI: Law and Governance, 2025.
12. G. M. Andal, “School Expansion and Performances of Public High Schools in the Province of Laguna,” International Journal of Research and Innovation in Social Science, 2024.
13. Jain et al., “Awareness of Racial and Ethnic Bias and Potential Solutions to Address Bias With Use of Health Care Algorithms,” 2023.
14. H.-F. Cheng et al., “How Child Welfare Workers Reduce Racial Disparities in Algorithmic Decisions,” 2022, doi: 10.1145/3491102.3501831.
15. R. S. Baker and A. Hawn, “Algorithmic Bias in Education,” 2021, doi: 10.35542/osf.io/pbmvz.
16. R. Wang, F. M. Harper, and H. Zhu, “Factors Influencing Perceived Fairness in Algorithmic Decision-Making: Algorithm Outcomes, Development Procedures, and Individual Differences,” 2020.
17. J. Kleinberg, J. Ludwig, S. Mullainathan, and C. R. Sunstein, “Discrimination in the Age of Algorithms,” 2019, doi: 10.3386/w25548.
18. D. Baidoo-Anu and L. O. Ansah, “Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning,” Journal of AI, 2023, doi: 10.61969/jai.1337500.
19. S. A. Alowais et al., “Revolutionizing Healthcare: The Role of Artificial Intelligence in Clinical Practice,” BMC Medical Education, 2023, doi: 10.1186/s12909-023-04698-z.
20. V. Hassija et al., “Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence,” Cognitive Computation, 2023, doi: 10.1007/s12559-023-10179.
