Subprocessors

Hyponema uses the subprocessors listed below to host, secure, and operate the service, analyze usage, process billing, deliver communications, and run customer-configured voice sessions.

Last updated: May 10, 2026

Legal entity

Commercial brand: Hyponema
Legal name: ENTROPY BAY, S.L.
Tax ID: B26671842
Registered office: Madrid, Spain
Data protection contact: [email protected]

Infrastructure and hosting

  • Google Cloud Platform: cloud hosting, database, networking, key management, secrets, logs, and managed AI services where configured. Region: primarily United States.
  • Cloudflare: DNS, WAF, edge delivery, and static landing deployment. Region: global.
  • Amazon Web Services: optional audio object storage when configured for deployments that use S3-compatible storage. Region: deployment-specific.

Product operations

  • Stripe: billing, invoices, checkout, tax, and payment processing.
  • Resend: transactional email, waitlist, login, and product email delivery.
  • PostHog: website and product analytics, feature usage, and conversion measurement.
  • Pydantic Logfire or OpenTelemetry endpoints: optional observability and trace export where configured.

Telephony and messaging

  • Telnyx: phone numbers, inbound and outbound calls, SMS, and media streaming when enabled by the customer.
  • Twilio: phone numbers, inbound and outbound calls, SMS, and media streaming when enabled by the customer.
  • Firebase Cloud Messaging: push notifications where enabled by the customer.

Customer-configured AI providers

These providers receive customer content only when selected by a workspace, configured through provider keys, or used by a customer-requested workflow.

  • Speech providers: Deepgram, AssemblyAI, OpenAI Whisper, Google Cloud Speech-to-Text, Groq, and other configured speech endpoints.
  • Model providers: OpenAI, Anthropic, Google Gemini or Vertex AI, Mistral, OpenRouter, Groq, custom OpenAI-compatible endpoints, and other configured model endpoints.
  • Voice providers: Cartesia, ElevenLabs, OpenAI, Deepgram, Google Cloud Text-to-Speech, and other configured voice endpoints.
  • Embedding providers: Google Gemini or Vertex AI by default, with optional OpenAI, Voyage, Cohere, or compatible providers where configured.

Subprocessor updates

We update this page when our material subprocessors change. Customers with a DPA may object to a new material production subprocessor on reasonable data protection grounds by contacting [email protected].

Need a signed agreement, DPA, vendor questionnaire, or procurement review? Contact [email protected].