A common concern we hear from senior leaders at early- and mid-stage companies is how to safeguard against leaking sensitive personally identifiable information (PII) while still allowing employees to use LLMs and other third-party AI tools. According to one survey, 71% of senior IT leaders hesitate to adopt generative AI due to security and privacy risks.
What is the risk?
Many business leaders are unaware of the risks associated with the unregulated use of AI tools, so let us first understand those risks.
In March 2023, The Economist reported three separate data-leakage incidents by Samsung employees shortly after the company allowed them to use ChatGPT. OpenAI's FAQ states that information shared by users can be used to retrain future models, and its ChatGPT usage guide warns: 'Do not enter sensitive information.'
This is not an isolated incident. Concerns and incidents related to data security and privacy with large language models (LLMs) like ChatGPT are growing. Here are two cases that illustrate similar issues:
1. Doctor-Patient Data Breach with a Medical LLM: A news report (the original source is difficult to trace, in part because of the privacy concerns involved) highlighted a potential data breach involving a medical LLM used in a healthcare setting. A doctor reportedly used the LLM to analyze patient data and generate reports, and may have inadvertently included identifiable patient information in the queries submitted to the model.
2. AI Bias in Hiring Decisions: This is not a data leak, but it demonstrates another risk of using LLMs on sensitive information. AI recruiting tools built on biased language models have reportedly led to discriminatory hiring practices: the models pick up subtle biases present in their training data and evaluate candidates unfairly. This is a particular concern when LLMs are used for tasks involving sensitive information such as job applications or loan approvals.
Overall, these incidents highlight the evolving landscape of data security and privacy in the age of LLMs.
As LLMs become more integrated into our lives, addressing these concerns will be crucial for ensuring responsible and ethical use of this powerful technology.
How to manage the risk?
To avoid this risk, some organizations have barred their employees from using AI tools altogether. This, in my opinion, is more harmful than helpful. Instead, we need to evaluate an organization's specific needs and the kinds of information and tasks these tools handle, and then create a secure, controlled environment for their safe use. This approach ensures that the benefits of AI are harnessed without compromising security or confidentiality: it involves robust security protocols, continuous monitoring, and regular employee training on best practices and the risks associated with AI tool usage. Below we discuss some common methodologies:
Create a robust AI Policy for your organization: This is where you should start. The AI policy can complement any third-party or open-source software policies the organization already has. The process of creating an AI policy can be broken down into the following tasks:
While these policies are a good first step, you may also need third-party tools and techniques that provide additional guardrails.
These techniques may involve:
Data Minimization and Sandboxing:
Data Anonymization, Data Redaction, and Data Pseudonymization:
User Training and Awareness:
Model Training and Development:
Additional Techniques:
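To make the redaction and pseudonymization ideas above concrete, here is a minimal Python sketch of a pre-submission scrubbing step. The patterns and function names are illustrative assumptions of mine, not a specific product's API; a production setup should use a vetted PII-detection library rather than hand-rolled regexes.

```python
import hashlib
import re

# Illustrative patterns only -- real PII detection needs far broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with a type placeholder (irreversible)."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def pseudonymize(text: str) -> tuple:
    """Replace detected PII with stable tokens and return the mapping,
    so the LLM's response can be re-identified internally later."""
    mapping = {}

    def token(match, label):
        digest = hashlib.sha256(match.group().encode()).hexdigest()[:8]
        placeholder = f"<{label}_{digest}>"
        mapping[placeholder] = match.group()
        return placeholder

    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(lambda m, l=label: token(m, l), text)
    return text, mapping

prompt = "Email jane.doe@example.com, SSN 123-45-6789, about the refund."
print(redact(prompt))
# -> Email [EMAIL], SSN [SSN], about the refund.
```

Redaction is appropriate when the downstream answer does not depend on the specific identity; pseudonymization preserves a reversible mapping inside the organization so responses can be rehydrated without the identifiers ever leaving your environment.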
Some third-party tools worth mentioning: Strac, OpaquePrompts (available via LangChain), Presidio by Microsoft, and LLM Guard, among others.
If you are not sure how to approach this at your organization, feel free to reach out to us at Growthclap.