Easy-to-Build, Easy-to-Expose: How Vibe Coding Is Creating New Data Risks
Key Points
- RedAccess researchers found that roughly 5,000 vibe-coded applications leaked corporate data online due to misconfigured privacy settings.
- Researchers were able to track these applications through browser searches that combined keywords with the AI companies’ platform domains.
- IANS Faculty recommend that organizations monitor public exposure, inventory AI-built apps and automate security checks.
On May 7, cybersecurity firm RedAccess reported that it found misconfigured or default public privacy settings in roughly 5,000 vibe-coded applications.
Around 40% of the applications -- built in Lovable, Base44, Replit or Netlify -- had “virtually no security or authentication of any kind” and exposed sensitive data such as hospital schedules, go-to-market presentations and sales records.
In some cases, anyone with the correct URL could access the information. Other applications had "trivial" authentication methods, including entering an email address.
The use of AI-assisted coding tools has taken off because they allow anyone to build software without engineering skills. However, the resulting lack of oversight and cybersecurity training has led companies to accidentally publish confidential data.
“The crucial point is that these platforms not only allow the vibe coder to create an application, but host the applications as well, replacing cloud and other typical web hosting platforms. The full stack exists in one place: from idea to production app connected to a proper domain name.” Adrian Sanabria, IANS Faculty.
Big Picture
AI vibe coding lets anyone create an application, increasing the likelihood of data exposure through user error and a lack of safeguards. When enterprise applications are built outside traditional development processes, the risk of data exposure is amplified.
“The entire idea behind vibe coding is that you have people who either don't know what they're doing or are lazy and don't care about what they're doing. Of course there are going to be issues with leaking sensitive data. I would also expect issues around authentication, vulnerable components, and a host of other issues.” Josh More, IANS Faculty.
If an organization allows vibe coding, security teams need to manage the risk: enforce guardrails by default, inventory AI-built apps, monitor public exposure and assume employees will deploy applications without understanding their authentication or privacy settings.
“The good news: if RedAccess can find all these exposed vibe-coded apps and connect them to businesses, so can you! We've been through this already with Shadow IT and engineers creating cloud accounts without notifying anyone or getting approval.” Adrian Sanabria, IANS Faculty.
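The discovery approach the researchers used -- pairing keywords with the platforms' hosting domains -- can be reproduced for your own organization. A minimal sketch, assuming your apps are published on the default platform subdomains; the domain list below is an assumption, so verify the domains each platform currently uses before relying on it.

```python
# Sketch: generate search-engine queries that pair company keywords with
# the hosting domains vibe-coding platforms publish apps to. The domain
# list is an illustrative assumption, not an authoritative inventory.

PLATFORM_DOMAINS = ["lovable.app", "replit.app", "netlify.app", "base44.app"]

def build_queries(keywords: list[str]) -> list[str]:
    """Return one site-scoped query per (platform domain, keyword) pair."""
    return [
        f'site:{domain} "{kw}"'
        for domain in PLATFORM_DOMAINS
        for kw in keywords
    ]
```

Feeding in company names, product names and internal project codenames approximates the same searches RedAccess used, letting you find your own exposed apps before someone else does.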
IANS Faculty Recommendations
- Assume vibe-coded apps are public by default: Treat every vibe-coded app as publicly reachable until its privacy and authentication settings have been reviewed and verified.
- Assume every internal vibe-coded prototype is reachable from Google: Index protection, robots.txt, and content security policy are not optional for AI-generated apps. Treat them as production from minute one.
- Add data lineage discipline to your shadow AI tools: Shadow AI tooling needs the same data lineage discipline as any production app. Microsoft's 2025 research attributes 40% of data security incidents to AI applications, and IBM tagged a $670K shadow AI tax on breach costs. The data was always somewhere; the platform just made it easier to publish.
Authors & Contributors
Nuria Diaz Munoz, Author, IANS News
Josh More, IANS Faculty
Adrian Sanabria, IANS Faculty
Although reasonable efforts will be made to ensure the completeness and accuracy of the information contained in our News & blog posts, no liability can be accepted by IANS or our Faculty members for the results of any actions taken by individuals or firms in connection with such information, opinions, or advice.