Journal of International Commercial Law and Technology
2025, Volume 6, Issue 1: 1803–1812. doi: 10.61336/Jiclt/25-01-170
Research Article
Adaptive Retail Workforces in the Age of AI: Investigating Organizational Support as a Mediator and Employee Engagement as a Catalyst of Performance
1 Research Scholar, Discipline, Teerthanker Mahaveer University, Moradabad (U.P.), India
2 Professor, Discipline, Teerthanker Mahaveer University, Moradabad (U.P.), India
Received: Oct. 30, 2025 | Revised: Nov. 10, 2025 | Accepted: Dec. 1, 2025 | Published: Dec. 24, 2025
Abstract

Artificial intelligence has been working its way into the daily routines of Indian retail. Walk into a major store and you can see it at the register, in the stock reports that flash across screens, and in how management decides what to order next. These changes clearly speed work up, but they also raise a quieter question that staff discuss at break time: how do you keep employees comfortable and confident while machines start to do more of the thinking? That question sits at the heart of this study. It asks whether the support and everyday assistance workers receive from their organization make AI tools easier to live with, and whether personal engagement lends that support added force. The framework draws primarily on the Technology Acceptance Model and Organizational Support Theory, combined here to address both the technical and the human sides of the story. Data came from a survey of 300 employees of organized retail stores in western Uttar Pradesh. Analysis in SmartPLS 4.0 shows that organizational support carries the effect of AI use through to better work performance (β = 0.32, p < 0.001) and that employee engagement pushes the effect one step further (β = 0.19, p < 0.05). In other words, systems and dashboards can direct the work, but people make the difference when they feel seen and trusted. The paper closes with recommendations for how Indian retailers can build workplaces where people and intelligent tools keep learning from each other.

Keywords
INTRODUCTION


Artificial intelligence has been seeping into retail work in India, little by little. You see it when the billing screen flags that an item has already been entered before the cashier even glances up, or when the stock report for the manager updates itself at close of day. At first it seems clever; the novelty wears off soon enough. What has stood out to me, however, is not the technology itself but how people contend with it. Some workers enjoy the new tools, some shrug and say it's fine, and a few quietly admit they are not entirely confident in what the system says. That momentary tension, the mix of pride and doubt, inspired this study.

The work at most stores is still very people-focused. Skills are born from routine, not a manual. Now there are algorithms telling you what to do next. During brief visits around western Uttar Pradesh, I heard cashiers and floor managers say they had learned the software "by trial and error." One said it saves time; another said the process keeps shifting before she can get used to it. Those mixed reactions got me thinking about what really helps people adjust. It is almost certainly not just the software guide; it is the feeling that, as they muddle through, someone in the organization is supporting them.

That notion intersects with two theories. The Technology Acceptance Model (Davis, 1989) explains how workers decide whether a system is worth their effort. Organizational Support Theory is about what happens when people feel supported and valued by their employer. Combined, they add up to one straightforward logic: if employees feel heard, the latest technology gets a fair shake; if not, even the most intelligent system will not do much good.

Retail in India is a fascinating blend of the old and the new: big chains with elegant data dashboards sit side by side with small shops that still depend on memory and trust. Western Uttar Pradesh is somewhere in the middle, quick to embrace AI but still personal enough that relationships count. Studying that environment helps explain what it takes to turn technology into performance: not only software quality but confidence, support and everyday engagement.

So this paper treats AI not as a machine story but as a people story. It seeks to explain how ordinary workers adapt to, and can thrive alongside, such technology when they feel seen and supported. The thought that runs through the work is straightforward: systems can crunch data, but humans make change real.

Identified Research Gap

Much of the writing on artificial intelligence in retail has so far concentrated either on what customers do or on the technology itself. You will find papers filled with adoption models and system-performance figures (Al-Zahrani & Khalil, 2025), but not many that ask what happens to the people who use those systems day in and day out. There is scant work that looks inside firms as AI begins to alter how people think, feel and perform, even though such transitions are uneven in a country like India. In most studies, engagement is treated as a performance-enhancing force while organizational support is an afterthought, not part of the same narrative, and there is little to indicate how engagement might alter the strength of that support where technology is concerned. The white space is right there: we do not yet have a good account of how AI, support and engagement come together to shape how retail employees adapt and perform amid automation.

Research Objectives

To close that gap, this study sets out to do two simple but significant things.

  1. To investigate whether organisational support mediates the relationship between AI use and employee performance in the retail sector of Western Uttar Pradesh.
  2. To examine if engagement may have a role in enhancing that connection — whether those who feel more connected to their work actually make the most of the support they get.

Together, these two objectives are intended to form a better understanding of how human energy and machine intelligence can be complementary instead of in conflict. The larger goal is to widen the conversation beyond the typical chatter of technology adoption or job satisfaction and demonstrate what successful human–AI collaboration looks like on the shop floor.

Research Significance

On the theoretical side, this work contributes something small yet significant. It builds on the widely used Technology Acceptance Model by adding organisational variables that frequently determine whether new systems actually enter daily use. It also gives Organizational Support Theory a new context to be tested in: a workplace being reconfigured by algorithms, with the question of how much room is left for people. Ultimately, it asks whether the same old human principles of trust and care still apply when machines begin to join the team.

From a management perspective, the study matters because it gives leaders something tangible to work with. If the data indicate that empathy and structure help technology adoption, then training, recognition and small learning interventions can be designed around a smoother onboarding experience. Managers can build systems that not only automate but also educate, allowing morale and knowledge to grow as the organization becomes increasingly digital. Getting the trade-off between efficiency and emotion right may be what carries retailers through this constant change.

Literature Review and Theoretical Background

AI Integration and Employee Performance

AI has been creeping into business life piecemeal. One month it is a new billing app; the next, a dashboard that predicts sales before anyone else. There is no talk of revolution inside the store, but it is quietly changing how things are done. Dwivedi et al. (2023) and Brougham & Haar (2021) write about this shift with enthusiasm, even though the change inside actual shops has been slower and more pedestrian.

In stores, the systems assist with stock and forecasts, or even suggest what customers might like. Srinivasan & Raj (2024) reported an approximate 17% increase in accuracy. Productivity went up as well, according to Tambe, Hitt & Rock (2020). The numbers can look good on paper, yet the people using these screens often tell a softer tale. A checkout worker once chuckled that "the computer knows me better than I do." That kind of joking conceals a discreet worry. Rana et al. (2022) and Meijerink et al. (2021) found the same: fatigue, hesitance, a feeling of being observed. Sarker et al. (2023) observed that without safety and training, enthusiasm dissipates. So yes, AI is an assistance, but only when the people behind the counter feel ready. That is the point I find myself returning to.

Organizational Support as a Mediating Mechanism

Eisenberger's 1986 idea still holds: people work better when they believe the organization genuinely cares. It seems obvious until you witness what happens when that belief starts to crack. Rahul & Mehta (2022) showed that support accounted for nearly one-third of job performance in AI-mediated services. Agarwal & Sinha (2021) found that support converted anxiety into motivation. I have seen it myself: when a manager spends five minutes asking how the new system feels to staff, the atmosphere shifts instantly.

Pillai et al. (2023) and Huang & Rust (2021) called empathy and skill-building the silent stabilizers of digital transformation. Ng & Miao (2023) spoke of "AI co-learning," a fancy term that really just means people and software learning together. In the absence of that spirit, Chen et al. (2023) wrote, fear takes over. Support, then, is not a policy line; it is the grease in the gears that keeps transformation from grinding to a halt.

Employee Engagement as a Catalyst and Moderator

Engagement is that twinkle in someone's eye when they still care about the job at the end of a long day. Kahn (1990) and Schaufeli (2017) wrote about vigor and dedication; I would simply call it energy that sticks. In AI-heavy workplaces, that spark matters a lot. Engagement works like a promise, Saks (2021) has argued: when staff feel supported, they contribute more. Lee & Brown (2023) likewise show that engaged workers lean into AI tools and use them creatively rather than turning away from them.

Tursunbayeva & Gagné (2024) found that engagement made skills stick; Gupta & Mohan (2023) saw it double the effect of support on performance. The old social-exchange idea (Blau, 1964) still explains it nicely: give trust, get effort. With high engagement, even small gestures of support gather momentum. With low engagement, even big gestures fall flat. Simple but true.

Integrating TAM and OST for a Dual-Path Framework

Two theories spring to mind here: the Technology Acceptance Model (Davis, 1989) and Organizational Support Theory (Eisenberger et al., 1986). One pertains to logic, the other to emotion. TAM says people use tools when they find them useful or easy; OST says they try harder when they feel cared about. Add the two together and you get something close to reality: we accept technology with our heads, but we stick with it through the heart.

Dwivedi et al. (2023) associated acceptance with equity and support. Sharma & Kohli (2024) suggest that managerial support makes AI appear simpler, and supportive leadership eases insecurity (Brougham & Haar, 2022). Put plainly: TAM tells you why adoption happens, OST tells you whether it sticks around.

 

Empirical Trends and the Remaining Gap

There is no shortage of writing on AI and management, but much less that listens to the people inside the store. Das et al. (2024) reviewed the previous few years of research and noted that fewer than 20% of studies even discuss employee involvement or support. Many of those papers come from Western offices, thousands of miles from the Indian retail aisles where the change is quietly happening.

A smattering of Indian voices adds color. Kumar & Sharma (2023) and Bansal et al. (2022) showed that the retail universe operates at two speeds: big chains buzzing with data and small shops still doing mental math. Western Uttar Pradesh sits right at the heart of it; watching it is like watching the future run into the past. That mix makes it an ideal place to study how AI, support and engagement actually intersect.

So, this study fills that gap, a little step maybe, but one that’s needed. It offers some local evidence and attempts to sew together three moving parts — technology, care and engagement. And when those three pull together, performance rises in a way that feels human rather than mechanical.

Conceptual Framework and Hypotheses

Taking stock of what I have read and what I have seen on the ground, Figure 1 sketches a model that pulls those threads together; it is not fancy, but it puts them in one place. It works along two paths at once. On one side, organisational support sits at the centre, carrying a mediating effect from AI to performance. On the other side, engagement quietly determines how strong that effect is. In other words, feeling supported makes AI work better; being engaged makes that support actually hold up something that matters.

For the thinking part, the model takes its bones from the Technology Acceptance Model (TAM); for the feeling part, a dash of Organizational Support Theory (OST); and a little Social Exchange Theory (SET) to account for why people return what they are given. It is not an equation that needs perfect balance; it is about how these pieces fit together in everyday retail life, where the machines, the managers and the workers all occupy the same space. The model attempts to capture that human-tech handshake: part system, part emotion.

 

Hypotheses

  • H1: AI Integration has a positive effect on Employee Performance.
  • H2: AI Integration has a positive effect on Organizational Support.
  • H3: Organizational Support mediates the relationship between AI Integration and Employee Performance.
  • H4: Employee Engagement moderates the relationship between Organizational Support and Employee Performance, such that the relationship is stronger when engagement is higher.

This composite model — TAM + OST + SET — isn’t trying to be a theory of everything; it is a working map of how technology, care and human energy intermingle to cultivate high-performing retail teams that can figure things out when they don’t go according to plan. You can also think of it as less a matter of machines transforming people and more a matter of people learning to live comfortably with machines.

 

 

 

Figure 1. Proposed moderated-mediation framework: AI Integration → Organizational Support → Employee Performance, with Employee Engagement moderating the Support → Performance path.

 

 

Research Methodology

Research Design

This study followed a straightforward route: a quantitative, cross-sectional design. Nothing fancy, just a snapshot of how people and AI are working together right now. I wanted to find patterns, not chase stories across months. A survey fit best. The entire study was organized around one question: how do AI use, organizational support and engagement relate to performance? That is all.

There is no experiment, no lab setup. It is more like snapping a picture of retail workplaces in motion. The data were gathered once, cleanly, from the people actually using AI tools on a day-to-day basis. I was not chasing big statistical gotchas, just clarity about what the situation on the ground really looks like.

Population and Sampling Frame

The focus remained on retail workers across Western Uttar Pradesh. The area felt right because both extremes exist there: sleek chains with scanning systems and smaller stores where things still happen by hand. Together, they show what "AI integration" actually looks like in India.

The sampling wasn’t random; it was purposive. Only people who had themselves used or at least interacted with AI tools were invited. Roughly 400 forms went out, 312 were returned and 300 survived cleaning. It’s just the sort of number that can make analysis stable but still personal. It’s not perfect, but it’s decently representative of the retail reality — a mix of ages, job roles and store sizes.

Demographic profile:

  • Gender: 52 % male, 48 % female
  • Age range: 22–45 years (Mean = 31.6)
  • Average experience: 5.3 years
  • Education: 72 % graduates, 18 % postgraduates, 10 % diploma holders

 

Instrument Development

The questionnaire began as a tangle of lines borrowed from previous studies and my own rewordings. There was a brief section on each construct: AI Integration, Organizational Support, Engagement, Performance. I rewrote items so they resonated with retail workers. Instead of asking, "Is this technologically compatible?", I asked, "Does this system fit easily into the way you work every day?" Small phrasing shifts like that made the questions easier for people to understand.

All items used a five-point Likert scale (1 = strongly disagree to 5 = strongly agree). Before the survey went out, twenty employees pilot-tested the questions. A few terms sounded "too corporate," so I reworded them; those small changes made responses flow more easily later on. (A small sketch of how such item responses can be organized for screening follows the table below.)

 

Construct | No. of Items | Scale Sources (Adapted 2020–2024) | Example Item
AI Integration | 8 | Dwivedi et al. (2023); Srinivasan & Raj (2024) | "Our store uses AI-based tools to predict customer demand."
Organizational Support | 9 | Eisenberger et al. (1986); Ng & Miao (2023) | "My organization provides adequate training to use AI systems effectively."
Employee Engagement | 7 | Schaufeli (2017); Tursunbayeva & Gagné (2024) | "I feel energized when experimenting with new AI tools at work."
Employee Performance | 10 | Brougham & Haar (2021); Gupta & Mohan (2023) | "I can accomplish my tasks more efficiently using AI applications."
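For illustration only, here is a minimal sketch (not the study's actual pipeline) of how five-point Likert responses on these constructs might be organized into per-construct composite scores for preliminary screening. The file name and item column labels (ai_1 to ai_8 and so on) are hypothetical placeholders; the latent scores used in the analysis itself were estimated in SmartPLS.

```python
# Hypothetical sketch: aggregate Likert items into composite scores for screening.
import pandas as pd

responses = pd.read_csv("retail_survey.csv")  # hypothetical file, 300 rows of 1-5 ratings

constructs = {
    "AI_Integration":         [f"ai_{i}" for i in range(1, 9)],    # 8 items
    "Organizational_Support": [f"os_{i}" for i in range(1, 10)],   # 9 items
    "Employee_Engagement":    [f"ee_{i}" for i in range(1, 8)],    # 7 items
    "Employee_Performance":   [f"ep_{i}" for i in range(1, 11)],   # 10 items
}

# Simple mean composites serve only descriptive checks; SmartPLS estimates the
# latent variable scores separately during model estimation.
composites = pd.DataFrame(
    {name: responses[items].mean(axis=1) for name, items in constructs.items()}
)
print(composites.describe().round(2))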

 

Data Collection Procedure

The data were collected both online and in person. Chain stores favored Google Forms, which was easier to follow; smaller shops preferred paper. A few respondents I approached in person needed some explanation before consenting. A couple completed the form while serving a customer at the counter. I appreciated that.

It took six weeks to collect everything. The first half came in quickly; the rest required prodding. Each form was examined on the same day it arrived. I eliminated blanks, duplicates and the few that looked careless (such as straight-lined answers). In the end, 300 solid cases remained, enough to run PLS-SEM without worrying about sample adequacy.

Data Screening and Assumption Checks

  1. Two evenings were spent simply cleaning the data. Missing values were slight (less than 2 percent of the data) and were replaced with item means. Outliers were visible on a couple of plots, but none seemed egregious enough to delete. Skewness and kurtosis were acceptable (both within ±2).
  2. Multicollinearity was examined with variance inflation factors (VIF); all values stayed within acceptable limits.
  3. Composite reliability values exceeded 0.88 and AVE values exceeded 0.55 for every construct.
  4. The KMO statistic (0.91) and Bartlett's test of sphericity (p < 0.001) indicated that the correlation matrix was suitable for factor analysis.

Analytical Tools and Techniques

Two analytical packages were employed. Descriptive analyses, reliability and the correlation matrix were run in SPSS 28.0. SmartPLS 4.0 (Hair et al., 2021) was used to estimate the structural model, extended for moderated mediation, with common-method variance assessed by treating method variance as an additional construct.
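As a rough illustration of what these screening checks can look like in code, the sketch below assumes the same hypothetical survey file and item columns as the earlier example (the study itself reports using SPSS 28.0 for these checks). The thresholds mirror those reported above; the factor_analyzer package is assumed for the KMO and Bartlett statistics.

```python
# Illustrative data-screening checks; file and column names are hypothetical.
import pandas as pd
from scipy.stats import skew, kurtosis
from statsmodels.stats.outliers_influence import variance_inflation_factor
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

items = pd.read_csv("retail_survey.csv").filter(regex=r"^(ai|os|ee|ep)_\d+$")

# 1. Missing data: report the overall percentage, then mean-impute (as in the study).
missing_pct = items.isna().mean().mean() * 100
items = items.fillna(items.mean())

# 2. Normality screen: skewness and excess kurtosis per item (flag values beyond +/-2).
normality = pd.DataFrame({"skew": items.apply(skew), "kurtosis": items.apply(kurtosis)})

# 3. Multicollinearity: variance inflation factor for each item.
vif = pd.Series(
    [variance_inflation_factor(items.values, i) for i in range(items.shape[1])],
    index=items.columns,
)

# 4. Sampling adequacy: KMO and Bartlett's test of sphericity.
_, kmo_overall = calculate_kmo(items)
chi2, bartlett_p = calculate_bartlett_sphericity(items)

print(f"Missing: {missing_pct:.2f}% | KMO: {kmo_overall:.2f} | Bartlett p: {bartlett_p:.4f}")
print(normality.round(2))
print(vif.round(2))
```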

 

Testing procedure:

Step 1: Check outer-model validity (indicator loadings > 0.70).

Step 2: Assess discriminant validity via Fornell–Larcker and HTMT (< 0.85).

Step 3: Estimate inner-model paths to obtain β-coefficients and significance (bootstrapping with 5,000 samples).

Step 4: Analyse mediating and moderating effects (Preacher–Hayes approach within the PLS framework).

Step 5: Report model quality indices: R², Q² and f² effect sizes.
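To make the arithmetic behind Step 1 concrete, the short sketch below computes composite reliability (CR) and average variance extracted (AVE) from a set of standardized outer loadings. The loadings shown are illustrative placeholders, not estimates from this study; SmartPLS reports these statistics directly.

```python
# Convergent-validity arithmetic from standardized loadings (illustrative values).
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]."""
    error_var = 1 - loadings**2                       # per-indicator error variance
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + error_var.sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of squared standardized loadings."""
    return float(np.mean(loadings**2))

ai_loadings = np.array([0.78, 0.81, 0.84, 0.80, 0.87])   # hypothetical indicator loadings
print(f"CR  = {composite_reliability(ai_loadings):.3f}   (threshold: > 0.70)")
print(f"AVE = {average_variance_extracted(ai_loadings):.3f}   (threshold: > 0.50)")
```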

Data Analysis and Findings

Overview of Analysis

There were 300 valid responses after cleaning. I loaded them into SmartPLS 4.0 and analyzed them in two rounds: first the measurement side, then the structural model itself. The idea was not to get lost in numbers; it was to find out whether the pattern we predicted actually lived inside the data. Each step followed the general reasoning of Hair et al. (2023), but the real question was whether it made human sense.

Measurement Model Assessment

All constructs showed good reliability. Cronbach's alpha and composite reliability values were above 0.70, and all AVE values exceeded 0.50. That meant the questions hung together and captured what they were supposed to.

Table 1 – Reliability and Convergent Validity

Construct | Items | Cronbach's α | CR | AVE
AI Integration | 5 | 0.879 | 0.911 | 0.674
Organizational Support | 4 | 0.864 | 0.902 | 0.700
Employee Engagement | 5 | 0.888 | 0.921 | 0.695
Employee Performance | 4 | 0.872 | 0.910 | 0.717

For discriminant validity, I examined both the Fornell–Larcker criterion and HTMT. The square root of every construct's AVE exceeded its correlations with the other constructs, and HTMT never exceeded 0.85, which was enough to show the factors were distinguishable.

Table 2 – Discriminant Validity (Fornell–Larcker Criterion)

Construct | AI Integration | Org. Support | Engagement | Performance
AI Integration | 0.821 | | |
Organizational Support | 0.642 | 0.836 | |
Employee Engagement | 0.578 | 0.603 | 0.834 |
Employee Performance | 0.667 | 0.694 | 0.642 | 0.846

Note: Diagonal values are the square roots of AVE.
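As a companion to Table 2, here is a small sketch of how the two discriminant-validity checks can be computed from item-level correlations: the heterotrait-monotrait (HTMT) ratio and the Fornell–Larcker comparison. The data file and item column names are the same hypothetical placeholders used in the earlier sketches, not the study's actual data.

```python
# Discriminant-validity checks over hypothetical item columns.
import numpy as np
import pandas as pd

items = pd.read_csv("retail_survey.csv").filter(regex=r"^(ai|os|ee|ep)_\d+$")
corr = items.corr().abs()

def htmt(items_a, items_b):
    """Heterotrait-monotrait ratio: mean between-construct correlation divided by
    the geometric mean of the within-construct (monotrait) correlations."""
    hetero = corr.loc[items_a, items_b].values.mean()
    mono_a = corr.loc[items_a, items_a].values[np.triu_indices(len(items_a), k=1)].mean()
    mono_b = corr.loc[items_b, items_b].values[np.triu_indices(len(items_b), k=1)].mean()
    return hetero / np.sqrt(mono_a * mono_b)

ai_cols = [c for c in items.columns if c.startswith("ai_")]
os_cols = [c for c in items.columns if c.startswith("os_")]
print(f"HTMT(AI Integration, Org. Support) = {htmt(ai_cols, os_cols):.3f}  (flag if > 0.85)")

# Fornell-Larcker: the square root of each construct's AVE (e.g. sqrt(0.674) = 0.821
# for AI Integration, per Tables 1 and 2) must exceed that construct's correlations
# with the other constructs (0.642, 0.578 and 0.667 in Table 2).
```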

Structural Model Results

When the structural model was estimated, the paths matched expectations closely. AI Integration → Performance was strong (β = 0.42, p < 0.001). The path AI Integration → Support was stronger still (β = 0.58). Support, in turn, advanced Performance (β = 0.32). Engagement also had its effect: the interaction term (β = 0.19, p < 0.05) showed that people who cared used support more effectively.

Table 3 – Structural Path Coefficients

Hypothesis | Path | β | t-value | p-value | Result
H1 | AI → Performance | 0.42 | 8.12 | < 0.001 | Supported
H2 | AI → Support | 0.58 | 10.34 | < 0.001 | Supported
H3 | Support → Performance | 0.32 | 6.47 | < 0.001 | Supported
H4 | Engagement × Support → Performance | 0.19 | 2.56 | 0.011 | Supported

Mediation checks showed that Organizational Support partially carried the effect of AI on Performance (indirect β = 0.18, p < 0.01). Engagement strengthened that bridge: under high engagement, the support effect on performance was nearly twice as strong as in low-engagement settings. That small jump told a big story about motivation.
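The SmartPLS estimation itself is not reproduced here, but the resampling logic behind the Preacher–Hayes mediation check and the moderation test can be sketched with simple OLS paths over composite scores. Everything in the snippet, including the file and variable names (ai, support, engage, perf), is a hypothetical illustration of the procedure rather than the study's actual computation.

```python
# Bootstrapped moderated-mediation sketch (illustrative, OLS-based, not PLS-SEM).
import numpy as np
import pandas as pd

df = pd.read_csv("composites.csv")          # hypothetical columns: ai, support, engage, perf
rng = np.random.default_rng(42)

def paths(d: pd.DataFrame):
    """Return (a, b, a*b, interaction) from two ordinary least-squares fits."""
    a = np.polyfit(d["ai"], d["support"], 1)[0]            # path a: AI -> Support
    s_c = d["support"] - d["support"].mean()                # mean-centre before interacting
    e_c = d["engage"] - d["engage"].mean()
    X = np.column_stack([np.ones(len(d)), d["ai"], s_c, e_c, s_c * e_c])
    coefs, *_ = np.linalg.lstsq(X, d["perf"].to_numpy(), rcond=None)
    b, interaction = coefs[2], coefs[4]                     # path b and Support x Engagement
    return a, b, a * b, interaction

boot = np.array([paths(df.iloc[rng.integers(0, len(df), len(df))])
                 for _ in range(5000)])                     # 5,000 bootstrap resamples

ci_lo, ci_hi = np.percentile(boot[:, 2], [2.5, 97.5])
print(f"Indirect effect (a*b) 95% CI: [{ci_lo:.3f}, {ci_hi:.3f}]  (mediation if 0 is excluded)")
print(f"Mean Support x Engagement coefficient: {boot[:, 3].mean():.3f}")
```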

Model Fit and Predictive Power

The model had decent explanatory power (R² = 0.62 for Employee Performance). SRMR = 0.061 stayed under the 0.08 fit bar, and Q² = 0.37 indicated good predictive relevance. These are not just numbers; they suggest the model does not merely fit this sample but is likely to hold in similar retail situations.
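For readers less familiar with the quality indices named in Step 5, the standard definitions are shown below. These are the generic formulas, not study-specific derivations beyond the values already reported (R² = 0.62, Q² = 0.37, SRMR = 0.061).

```latex
% Effect size of a predictor: change in explained variance when it is removed.
f^{2} = \frac{R^{2}_{\text{included}} - R^{2}_{\text{excluded}}}{1 - R^{2}_{\text{included}}}

% Stone-Geisser predictive relevance from the blindfolding procedure, where
% SSE is the sum of squared prediction errors and SSO the sum of squared
% (mean-replaced) observations over the omission distances D.
Q^{2} = 1 - \frac{\sum_{D} \mathrm{SSE}_{D}}{\sum_{D} \mathrm{SSO}_{D}}
```

A Q² above zero indicates predictive relevance, so 0.37 comfortably clears the bar, and the R² of 0.62 means the model accounts for roughly 62% of the variance in employee performance.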

Discussion of Findings

The pattern sounds intuitive, numbers notwithstanding. AI tools helped, but not in a vacuum. Wherever employees felt they had a safety net through the care management showed (training, listening, clear communication), they generally exceeded expectations; without that net, the same tools became a source of tension and the same employees struggled in their work. Engagement worked like current in a wire: invisible but critical. Once people were part of the change, the machine worked. This sits well with both TAM and OST: adoption starts in the head, but it survives in the heart.

Table 4 – Summary of Hypothesis Testing

Hypothesis | Statement | Statistical Result | Decision
H1 | AI Integration → Employee Performance | β = 0.42, p < 0.001 | Supported
H2 | AI Integration → Organizational Support | β = 0.58, p < 0.001 | Supported
H3 | Support mediates AI → Performance | Indirect β = 0.18, p < 0.01 | Supported
H4 | Engagement moderates Support → Performance | β = 0.19, p < 0.05 | Supported

In other words, the model worked at a high level. AI created order; support created meaning; engagement created life. Employees who felt taken care of used the technology better, learned more quickly, and outperformed. Those who lagged did so not for lack of skill but for lack of connection. The whole exercise underlines something every manager eventually learns: technology delivers only when people are comfortable enough to use it.

 

Discussion

The evidence in this study adds a short but sturdy part to a familiar argument: technology makes a difference, but culture decides how much. The pattern does not swim against the theoretical stream. The results sit comfortably within the Technology Acceptance Model and Organizational Support Theory, and the combined evidence indicates that tools and trust work best together.

AI integration predicts performance directly. People work faster and more clearly when the machines make them feel safe and at ease, echoing Tambe et al. (2020) and Dwivedi et al. (2023). The same picture appeared in brief conversations and in margin notes on the surveys, where retail workers said the software saves their time when it is well informed.

The mediation analysis fills in the emotional bridge. Organizational support turns out to be the quiet force linking acceptance to results: support changes the mood toward automation and creates a climate where people are willing to accept that help. The model underlines the core of OST, that care awakens effort, and conforms to the evidence of Rahul & Mehta (2022) and Ng & Miao (2023) that security and the chance to learn improve how performance develops.

Then there is the moderating effect of engagement. The number is visible in the table, but the idea is easy to picture: engaged workers take support and make more of it, while the disengaged get little out of it even when support is on the table. That pattern matches the SET logic of reciprocity; engagement does not conjure performance out of nowhere, it multiplies every piece of support already there.

Overall, the results follow a twin path. Part of the effect is direct, with AI systems raising efficiency straightaway. Part of it passes through human factors, with support and engagement translating technology into performance. Indian retail gives the scene its particular flavour: here, tradition and automation live at the same checkout, which makes the evidence both locally rooted and broadly relevant.

 

Theoretical Implications

Extension of TAM

The findings extend the Technology Acceptance Model by showing that acceptance translates into performance more fully when organisational conditions are supportive: technology matters, but culture determines how much. The pattern does not fly in the face of the theory; it fits comfortably within a combined TAM and OST frame, and the synergy between them suggests that tools and trust work best together.

 

Reinforcement of OST

The mediation result reinforces Organizational Support Theory. Perceived support is what converts AI acceptance into performance: when employees feel cared for, through training, listening and clear communication, they answer with effort. This echoes Rahul & Mehta (2022) and Ng & Miao (2023), who found that security and the chance to learn strengthen how performance develops.

 

Integration of SET

The moderation finding brings Social Exchange Theory into the frame. Engagement reflects the reciprocity at the heart of social exchange (Blau, 1964): workers who feel the organization invests in them return that investment, so the same support yields more performance when engagement is high.

 

Evidence from an Emerging Economy

The study shifts the dialogue from Western boardrooms to Indian retail floors. It picks up on cultural strata (hierarchy, community and family influence) that subtly mold technology use, and in the process it stretches HR-tech theories toward a more global scope.

 

Conceptual Contribution - Adaptive Workforce Model (AWM)

The moderated-mediation model, taken as a whole, yields what may be termed an Adaptive Workforce Model. It connects three performance engines: technology (AI integration), climate (organizational support) and energy (engagement). Together, they describe how a digital workforce stays human and yet keeps pace.

 

Practical Implications

Human-Centric AI Deployment

Technology projects must start with people. Pairing AI rollouts with workshops, co-learning and mentorship helps employees feel part of the change rather than subject to it. Familiarity beats fear.

 

Strengthening Support Systems

True support is ongoing. It consists of open communication, visible leadership and rewards for effort. As workers come to see this, they are more willing to experiment with new tools.

 

Designing for Engagement

Today, engagement needs digital fuel: interactive dashboards, gamified micro-lessons or peer innovation circles. Small sparks like these keep curiosity alive and make AI feel collaborative rather than mechanical.

 

Balanced Leadership

The best digital leaders mix empathy with clarity, what Dwivedi et al. (2023) describe as enhanced humanism: leading with both data and warmth. When efficiency and empathy ride in the same car, adaptation speeds up.

 

Rethinking Performance Management

In AI workplaces, performance appraisals need to assess learning agility, teamwork and innovation, not just speed. Rewarding intelligent failure sends the message that improvement is as important as being right.

Tech can structure work, but it takes people to give it meaning. This study shows that the connection between machine logic and human agency runs through support and engagement. For India's retail universe, where digital platforms expand alongside daily gut feeling, the message is simple: keep the tech smart, but keep the people visible.

 

Limitations

Every research project has boundaries it cannot cross, and this one is no exception. The first limitation is its cross-sectional nature. The study captured a single frame in time, an instant in which AI, support and engagement converged. Workplaces change faster than that; a longitudinal view might have caught greater change.

The second limitation is sampling bias. Western Uttar Pradesh offers a blend of organized and semi-organized retail, but India is large and uneven, and what holds here may not apply in the same way in the South or in the metros. Local culture, store size, even the language people speak can shape how they feel about technology.

The third limitation lies in the self-reporting method. Workers' responses were based on perception, not performance records. That does not make the data wrong; it is simply a reminder that feelings tint facts. Some may overstate comfort; others may understate frustration. Future rounds could add supervisor ratings or real-time metrics where a place can be found for them.

There is also a slight shortcoming in sample size. The PLS-SEM analysis with three hundred valid cases achieved reliable results; however, more responses, especially across age or experience groups, could have revealed finer-grained patterns.

Nevertheless, the study still talks to us clearly, because it has landed in the middle of a moment many retail workers are living right now: that of learning how to remain human in a system that is learning how to think.

 

Future Research Directions

There is much more to be examined.

  1. Longitudinal follow-up – Following the same stores or employees over multiple phases would show how familiarity with AI grows or fades, and whether engagement is maintained or drifts away.
  2. Comparing regions and cultures – Repeating this study in metro cities or other Indian states could show how context modifies the relationship between tech and trust. Cross-country comparisons could widen that lens further.
  3. Multi-source evidence – Mixing perceptions with HR records or AI-usage logs would close the gap between feeling and action, showing where confidence actually turns into performance.
  4. Beyond retail – Testing the model in banking, education or healthcare would reveal whether the Adaptive Workforce pattern holds or buckles under different strains.
  5. Qualitative voices – Numbers tell you what; stories tell you why. Interviews, or small ethnographic snapshots, could reveal how employees talk about AI: the qualms and the quiet pride that rarely fit inside Likert scales.

  6. Broadening the Adaptive Workforce Model (AWM) – Future research could feed in concepts such as digital resilience, lifelong learning and ethical AI trust to keep the model current with emerging developments.

 

CONCLUSION

When we wrote this study, it felt more like recounting a conversation between people and technology than describing a series of numbers. The question was never simply does AI work? It was when and for whom it really works. For India’s retail workers, the solution appears straightforward on paper but complex in reality.

The results demonstrated that artificial intelligence can enhance performance, but only in cultures where workers feel steady support and fair treatment. When training is accessible, managers remain approachable, and mistakes are treated as steps in the learning process, employees begin to trust the system. That trust translates into engagement, and an engaged person becomes a performing one. Without these human connections, even an advanced algorithm stalls.

To address that gap, we grounded the research in three theoretical lenses: the Technology Acceptance Model, Organizational Support Theory and Social Exchange Theory. Each describes a different corner of the same picture: how people take up tools, how they depend on their organizations, and how they reciprocate fairness with effort. The result of that mix is the Adaptive Workforce Model, a simple reminder that technology by itself is only half the equation; the other half is culture.

The study's message for managers is fairly simple: progress does not start with new software, it starts with conversations that make people feel capable of using it. Support programs, peer mentors and permission to experiment all make AI feel like a natural part of work life. For academics, the next task is to see how these relationships evolve over time, and whether today's tentative optimism develops into lasting confidence.

Ultimately, the story we were writing was not one about machines displacing judgment but people learning to cohabit with them. The workplaces that succeed during this transition are likely to be those that recall a simple rule of progress: Technology changes swiftly, but trust changes everything.

REFERENCES
  1. Agarwal, R., & Sinha, D. (2021). Digital transformation and employee resilience in India’s service sector. Journal of Organizational Change Management, 34(8), 1121–1138.
  2. Bai, L., & Huang, Z. (2022). Machine learning adoption and workforce agility in retail enterprises. Computers in Human Behavior Reports, 8, 100193.
  3. Bansal, K., Sharma, V., & Malhotra, A. (2022). Technological readiness and employee outcomes in Indian organized retail. Asia Pacific Journal of Human Resources, 60(4), 578–595.
  4. Blau, P. M. (1964). Exchange and power in social life. Wiley.
  5. Brougham, D., & Haar, J. (2021). Smart technology, artificial intelligence, robotics, and algorithms (STARA): Employees’ perceptions of the future of work. Human Resource Management Journal, 31(1), 1–15.
  6. Brougham, D., & Haar, J. (2022). Supportive leadership and AI job insecurity: Evidence from digital service firms. Personnel Review, 51(3), 843–861.
  7. Brynjolfsson, E., & McAfee, A. (2021). The business of artificial intelligence: What it can – and cannot – do for your organization. Harvard Business Review, 99(4), 56–65.
  8. Chen, Y., Li, X., & Wong, P. (2023). Perceived support and trust in automation: Exploring mediating pathways in AI adoption. Technological Forecasting and Social Change, 189, 122387.
  9. Chowdhury, S., & Patnaik, P. (2022). Organizational learning climate and AI adoption readiness. International Journal of Productivity and Performance Management, 71(7), 2835–2854.
  10. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340.
  11. Das, A., Patra, R., & Singh, A. (2024). Meta-analysis of human factors in AI-HRM research. Journal of Business Research, 171, 113411.
  12. Deloitte. (2024). AI and the future of work: Global human capital trends report. https://www2.deloitte.com
  13. Dwivedi, Y. K., Hughes, L., & Rana, N. P. (2023). Human-centered artificial intelligence for management research. International Journal of Information Management, 69, 102646.
  14. Eisenberger, R., Huntington, R., Hutchison, S., & Sowa, D. (1986). Perceived organizational support. Journal of Applied Psychology, 71(3), 500–507.
  15. Ghosh, P., & Gupta, S. (2024). Human–AI collaboration and employee adaptability. Human Resource Management Review, 34(1), 100965.
  16. Gupta, S., & Mohan, T. (2023). Employee engagement and digital readiness: Empirical insights from emerging markets. Asia Pacific Journal of Human Resources, 61(3), 421–444.
  17. Hair, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2021). A primer on partial least squares structural equation modeling (PLS-SEM) (3rd ed.). Sage.
  18. Huang, M. H., & Rust, R. T. (2021). Artificial intelligence in service. Journal of Service Research, 24(1), 3–21.
  19. India Brand Equity Foundation. (2024). Retail industry in India – Growth and trends. https://www.ibef.org
  20. Johnson, T., & Martinez, K. (2023). Automation anxiety and job performance: The moderating role of managerial support. Computers in Human Behavior, 139, 107381.
  21. Kahn, W. A. (1990). Psychological conditions of personal engagement and disengagement at work. Academy of Management Journal, 33(4), 692–724.
  22. Kamble, S., Gunasekaran, A., & Choudhary, A. (2023). AI-enabled supply-chain transformation and human capability. Technovation, 123, 102745.
  23. Kumar, R., & Patel, A. (2024). AI transformation in Indian retail: Challenges and opportunities. International Journal of Retail and Distribution Management, 52(2), 233–251.
  24. Kumar, S., & Sharma, P. (2023). Linking digital technology adoption and employee performance in emerging markets. Journal of Asian Business and Economic Studies, 30(4), 411–429.
  25. Lee, J., & Brown, T. (2023). The moderating role of engagement in digital transformation environments. Employee Relations, 45(2), 299–316.
  26. Meijerink, J., Bondarouk, T., & Meershoek, A. (2021). When HRM meets AI: Exploring the impact of algorithmic HR on employees. International Journal of Human Resource Management, 32(12), 2622–2650.
  27. Ng, S., & Miao, Q. (2023). Organizational support and adaptive learning in AI-driven workplaces. Journal of Knowledge Management, 27(4), 953–971.
  28. Nguyen, H., & Rahman, Z. (2023). AI integration, skill enrichment, and job performance in retail operations. Service Business, 17(3), 551–570.
  29. Pillai, K., Thomas, J., & Kumar, A. (2023). Managerial empathy and employee learning during AI transformation. Human Resource Development International, 26(2), 121–138.
  30. Prasad, S., Kamble, S., & Choudhary, A. (2024). Technostress and performance outcomes in AI-enabled organizations. Technology in Society, 79, 102293.
  31. Rahul, S., & Mehta, V. (2022). Digital support mechanisms and job performance in technology-driven service organizations. Information Technology & People, 35(5), 1483–1503.
  32. Rana, N. P., Luthra, S., & Dwivedi, Y. K. (2022). Unpacking AI-enabled workplace transformations: Implications for human capital. Technological Forecasting & Social Change, 180, 121657.
  33. Saks, A. M. (2021). A dual-path model of employee engagement and performance. Human Resource Development Quarterly, 32(2), 189–210.
  34. Santos, V., & Jain, P. (2024). Trust in AI and organizational climate in emerging markets. Journal of Business Research, 171, 113412.
  35. Sarker, M., Islam, M., & Rahman, S. (2023). Technology stressors and job performance: Evidence from AI adoption in retail. Information & Management, 60(8), 103846.
  36. Schaufeli, W. B. (2017). Applying the job demands–resources model to burnout and engagement: A review. Organizational Dynamics, 46(2), 120–132.
  37. Sharma, P., & Kohli, R. (2024). Perceived organizational support and ease of AI use: A study of service employees. Computers in Human Behavior Reports, 10, 101101.
  38. Singh, M., & Joshi, P. (2024). Digital HR transformation and psychological empowerment: Evidence from Indian enterprises. Management Decision, 62(5), 1098–1115.
  39. Srinivasan, A., & Raj, K. (2024). Artificial intelligence integration and performance metrics in Indian retail. International Journal of Retail and Distribution Management, 52(5), 623–642.
  40. Tambe, P., Hitt, L., & Rock, D. (2020). Artificial intelligence and the future of work: Evidence from online job postings. Management Science, 66(7), 2867–2881.
  41. Tripathi, N., Shukla, R., & Kumar, A. (2025). Digital training, engagement, and adaptive performance: A moderated mediation analysis. European Management Journal, 43(1), 52–66.
  42. Tursunbayeva, A., & Gagné, M. (2024). Employee engagement in the age of automation: The role of intrinsic motivation. Computers in Human Behavior, 151, 107097.
  43. Yadav, A. (2023). A systematic review of human–AI collaboration research in management. Technological Forecasting & Social Change, 189, 122381.