Data Collection: What Do We Gather and How Is It Used?
The scope of data collection depends on the definition of the "customer" within our platform. We distinguish between two key user groups:
Platform Users: To access our platform, users are only required to provide their email address, first name, and last name. Authentication is handled by Auth0, using standard OAuth 2.0/OpenID Connect flows, to ensure security and compliance (a minimal sketch of the login flow follows this list).
End Users: No personally identifiable information (PII) is collected from end users for analytics or CRM purposes. Instead, we focus on collecting the outputs returned by the LLMs to improve our services and insights.
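For illustration, here is a minimal, hypothetical sketch of how a platform sign-in through Auth0's standard OAuth 2.0 authorization-code flow can be initiated while requesting only the basic profile scopes needed for email, first name, and last name. The tenant domain, client ID, and callback URL are placeholders, not our actual configuration.

```python
# Minimal, hypothetical sketch: starting an Auth0 sign-in via the standard
# OAuth 2.0 authorization-code flow. Only the OpenID Connect scopes needed for
# email, first name and last name are requested. Domain, client ID and callback
# URL are placeholders, not our actual configuration.
import secrets
from urllib.parse import urlencode

AUTH0_DOMAIN = "example.eu.auth0.com"               # hypothetical tenant
CLIENT_ID = "YOUR_CLIENT_ID"                        # hypothetical application
CALLBACK_URL = "https://app.example.com/callback"   # hypothetical redirect URI

def build_login_url() -> str:
    """Return the Auth0 /authorize URL that starts the hosted login flow."""
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": CALLBACK_URL,
        "scope": "openid profile email",     # yields email, given_name, family_name
        "state": secrets.token_urlsafe(16),  # CSRF protection for the callback
    }
    return f"https://{AUTH0_DOMAIN}/authorize?{urlencode(params)}"

print(build_login_url())
```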
Data Storage and Security Measures
We prioritize data security by storing all collected data in data centers within the European Union. These data centers are protected using multiple layers of security:
Restricted Access: Data is not exposed to the Internet and is only accessible through private networks and an encrypted VPN. The principle of least privilege is enforced, ensuring that only the relevant development team can access the data.
Encryption Standards: Data is securely transmitted via APIs using TLS 1.3 encryption. Access is safeguarded by authentication mechanisms (short-lived JWTs) and fine-grained role-based access control (RBAC), ensuring users only access the data they are authorized to view (a minimal validation sketch follows this list).
Web Application Firewall (WAF): API endpoints are protected using a WAF, adding another layer of defense against potential threats.
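As a minimal sketch of the access controls described above, the snippet below shows how an API endpoint can validate a short-lived JWT and enforce a fine-grained RBAC check before returning any data. The JWKS URL, audience, and the name of the permissions claim are illustrative assumptions, not our exact implementation.

```python
# Minimal sketch (not our exact implementation): validating a short-lived JWT and
# enforcing a fine-grained RBAC check before any data is returned. The JWKS URL,
# audience and "permissions" claim name are illustrative assumptions.
import jwt  # PyJWT

JWKS_URL = "https://example.eu.auth0.com/.well-known/jwks.json"  # hypothetical
API_AUDIENCE = "https://api.example.com"                         # hypothetical

jwks_client = jwt.PyJWKClient(JWKS_URL)

def authorize(token: str, required_permission: str) -> dict:
    """Verify signature, expiry and audience, then check a specific permission."""
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    claims = jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=API_AUDIENCE,  # rejects tokens issued for other APIs
    )  # raises if the token is expired, tampered with or mis-issued
    if required_permission not in claims.get("permissions", []):
        raise PermissionError(f"missing permission: {required_permission}")
    return claims  # callers only proceed for authenticated, authorized users
```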
AI Model Providers and Updates
Our platform integrates with the APIs of the world's leading LLM providers, ensuring continuous updates and improvements. As of February 18, our platform supports:
OpenAI
Gemini (latest version upgrade: Gemini 1.5 to Gemini 2.0)
Llama
Anthropic
DeepSeek
Perplexity
We regularly assess and update our model offerings to ensure our users benefit from the latest advancements in AI.
Here's an up-to-date list of all the models used.
Security and Compliance in API Connections
To maintain a high level of security and compliance in our connections to AI model providers, we implement robust protection measures:
Encryption Protocols: All data in transit is secured using TLS 1.2/1.3 to ensure confidentiality and integrity.
Strict Authentication Mechanisms: We leverage OAuth 2.0 and API keys to connect with AI model providers, ensuring secure and controlled access (a minimal example of such a call follows the note below).
Note: All connections to the LLM providers are encrypted and secure.
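As an example of what such a connection looks like in practice, here is a minimal sketch using the OpenAI Chat Completions API as the endpoint: the request is authenticated with an API key and the HTTPS session is configured to refuse anything below TLS 1.2. The model name and environment variable are illustrative choices, not a description of our production setup.

```python
# Minimal sketch of an outbound provider call, using the OpenAI Chat Completions
# API as the example endpoint. The request is authenticated with an API key and
# the HTTPS session refuses to negotiate anything below TLS 1.2. Model name and
# environment variable are illustrative, not our production configuration.
import os
import ssl

import requests
from requests.adapters import HTTPAdapter

class MinTLSAdapter(HTTPAdapter):
    """Transport adapter that enforces TLS 1.2+ on every connection it manages."""
    def init_poolmanager(self, *args, **kwargs):
        ctx = ssl.create_default_context()
        ctx.minimum_version = ssl.TLSVersion.TLSv1_2
        kwargs["ssl_context"] = ctx
        return super().init_poolmanager(*args, **kwargs)

session = requests.Session()
session.mount("https://", MinTLSAdapter())

response = session.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},  # API-key auth
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Pinning a minimum TLS version at the session level means every provider call made through that session inherits the same transport guarantee, regardless of which endpoint it targets.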
Certifications and Security Assessments
We uphold rigorous security standards through continuous assessments:
ISO 27001 Compliance: Our platform adheres to ISO 27001 standards, ensuring best practices in information security.
SOC 2 Certification (Type 1): We are SOC 2 Type 1 compliant, and we are actively working towards achieving SOC 2 Type 2.
Weekly Penetration Testing: We run weekly penetration tests with Acunetix to identify and address vulnerabilities proactively.
Data Privacy and Retention for the Share of Model Module
At Jellyfish, we prioritize the privacy and security of our users' data, especially when it comes to advanced AI integrations like the Share of Model module.
For Share of Model, no client data is stored on our platform, with the exception of essential user metadata such as email address, first name, and last name. All other information used in the analysis process originates from publicly available data, specifically the output returned by the large language models (LLMs) during analysis.
Users retain full control over which AI models are used: the system supports case-by-case selection or exclusion of specific LLMs when initiating an analysis. This ensures transparency and flexibility in managing data flow through third-party providers.
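As a purely hypothetical illustration of that per-analysis choice (the field names below are invented, not our actual API schema), a request could express model selection and exclusion along these lines:

```python
# Purely hypothetical illustration: the field names are invented to show the idea
# of per-analysis model selection and exclusion, not our actual API schema.
analysis_request = {
    "brand": "ExampleBrand",
    "models": ["openai", "gemini", "anthropic"],    # LLMs included in this run
    "excluded_models": ["deepseek", "perplexity"],  # explicitly opted out for this run
}
# Only the providers listed in "models" receive prompts for this analysis;
# excluded providers are never called, so no data flows through them.
```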
Importantly, no confidential or proprietary client data is ever shared with third-party model providers. We have established contracts with all LLM providers that enforce strict non-retention clauses. Most of our partners guarantee zero data retention; in cases where minimal retention is necessary, the maximum duration is strictly limited to 30 days.
Anthropic and OpenAI (ChatGPT) retain data for up to 30 days.
Meta’s Llama and Google Gemini enforce zero data retention.
Additionally, all providers are contractually bound not to use any data from our platform to train their models.
These safeguards ensure that organizations leveraging the Share of Model feature can confidently adopt AI-powered analysis without compromising data security or control.
Data Privacy and Retention for the Asset Evaluation Module
At Jellyfish, we place the highest importance on the privacy and security of our users’ data—especially when handling sensitive assets like images, videos, and user-submitted text through the Asset Evaluation module.
Unlike the Share of Model module, Asset Evaluation involves the storage of client-submitted content. This includes media files (images and videos) and textual inputs provided by users during the evaluation process.
All stored data is treated as strictly confidential and is automatically and permanently deleted in the following cases:
At the termination of a contract, all associated data is fully and irreversibly removed from our systems.
At any time, users can delete specific content directly from the interface. This triggers immediate and permanent deletion of the selected assets.
We never share any client-submitted content with third parties, and no data is used for training AI models. Our infrastructure and access controls are designed to ensure that only authorized users within your organization can access stored assets.
All data is stored with a cloud provider certified under ISO/IEC 27001, the leading international standard for information security management. This guarantees that data storage and access controls meet the highest standards of confidentiality, integrity, and availability.
These guarantees allow organizations to confidently use Asset Evaluation while maintaining full control over their data, ensuring both compliance and peace of mind.