Responsible AI at Cresta

Elevating innovation with ethical intelligence

At Cresta, we are dedicated to pioneering a future where artificial intelligence not only drives transformative business outcomes, but does so ethically and safely. Our approach to responsible AI is not just a commitment to technology; it’s a commitment to the trust, transparency, and ethical use of AI in shaping a better, more human-centric future.

How we at Cresta define responsible AI

We have identified four core areas that we believe should guide AI development and use, forming the foundation of how we innovate responsibly.

Fairness

Cresta works to ensure fairness at the model level by leveraging diverse datasets, customizing models with proprietary customer data, and mitigating bias through robust monitoring. At the application level, Cresta fosters inclusivity by reinforcing context-specific best practices and minimizing human bias in performance management, empowering agents with diverse skills and backgrounds to be successful.
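
As a simplified illustration of what automated bias monitoring can look like, the sketch below compares how often a model produces a positive outcome for each group and flags large gaps. The group labels, toy records, and the four-fifths threshold are illustrative assumptions, not a description of Cresta's monitoring pipeline.

    from collections import defaultdict

    # Hypothetical parity check: compare per-group "positive outcome" rates
    # and flag large gaps using the common four-fifths heuristic.
    def selection_rates(records):
        """records: iterable of (group_label, got_positive_outcome) pairs."""
        totals, positives = defaultdict(int), defaultdict(int)
        for group, positive in records:
            totals[group] += 1
            if positive:
                positives[group] += 1
        return {group: positives[group] / totals[group] for group in totals}

    def disparate_impact_ratio(rates):
        """Lowest group rate divided by the highest; 1.0 means perfect parity."""
        return min(rates.values()) / max(rates.values())

    # Toy records; real monitoring would read logged, de-identified outcomes.
    records = [("group_a", True), ("group_a", False), ("group_b", True), ("group_b", True)]
    rates = selection_rates(records)
    if disparate_impact_ratio(rates) < 0.8:  # four-fifths rule of thumb
        print("Potential disparity:", rates)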

Privacy & ethics

Cresta upholds the highest standards of privacy and ethics by focusing on enhancing agent performance and customer experience without compromising personal data. We do not use any protected explicit signals, such as metadata related to medical or financial attributes, when training our models. We apply industry-leading personally identifiable information (PII) redaction algorithms, and we maintain Payment Card Industry Data Security Standard (PCI DSS) and ISO 27701 compliance.
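
For readers unfamiliar with PII redaction, the sketch below shows the general idea: sensitive spans are replaced with typed placeholders before text is used downstream. The regex patterns and labels are simplified assumptions and do not describe Cresta's production redaction algorithms.

    import re

    # Illustrative patterns only; production-grade redaction relies on far
    # more robust detection than a few regular expressions.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact(text: str) -> str:
        """Replace detected spans with typed placeholders before any downstream use."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    print(redact("Call me at 415-555-0123 or email jane.doe@example.com"))
    # -> "Call me at [PHONE REDACTED] or email [EMAIL REDACTED]"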

Transparency

Transparency is at the core of Cresta’s responsible AI approach. We invest heavily in R&D designed to make it easier for humans to interpret and understand the outputs of our AI models, including cutting-edge techniques like Chain-of-Thought (CoT) reasoning and model-based critique. We’ve developed a variety of internal tools designed to observe and benchmark our models, and we will continue devoting resources and roadmap capacity to exposing more of these tools directly to customers in ways that add value and increase trust.
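
To make those techniques concrete, the sketch below shows one common way chain-of-thought prompting and a model-based critique pass can be wired together. The call_llm function is a hypothetical stand-in for a completion endpoint, and the prompts are illustrative; this is not Cresta's implementation.

    # call_llm is a hypothetical stand-in for whatever completion endpoint is
    # in use; it is not a Cresta or vendor API.
    def call_llm(prompt: str) -> str:
        raise NotImplementedError("wire this to your completion endpoint")

    def answer_with_cot(question: str) -> str:
        # Ask the model to expose intermediate reasoning so humans can inspect it.
        return call_llm(
            "Think step by step, then state a final answer.\n"
            f"Question: {question}\nReasoning:"
        )

    def critique_draft(question: str, draft: str) -> str:
        # A second pass judges the draft against explicit criteria before it is shown.
        return call_llm(
            "Review this AI-drafted reply for factual accuracy, tone, and policy "
            f"compliance.\nQuestion: {question}\nDraft: {draft}\n"
            "List any problems found, or reply APPROVED."
        )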

Quality optimization & risk management

Our commitment to quality optimization and risk mitigation is evident throughout our development lifecycle. Our AI models are fine-tuned for context and use case. We do extensive post-processing, leveraging techniques such as filter models, re-ranking, and self-critique to double-check the outputs of our LLMs. By continuously monitoring usage trends, implementing feedback loops, and employing risk mitigation strategies, Cresta ensures high-quality, ethical AI that evolves alongside our customers’ needs.
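
The sketch below illustrates the general shape of such a post-processing step: filter candidate outputs, then re-rank the survivors. The passes_filter and score hooks are hypothetical placeholders rather than Cresta's actual filter or re-ranking models.

    from typing import Callable, List, Optional

    # passes_filter and score stand in for a filter model and a re-ranking
    # model; both are hypothetical hooks, not Cresta components.
    def select_response(
        candidates: List[str],
        passes_filter: Callable[[str], bool],
        score: Callable[[str], float],
    ) -> Optional[str]:
        # Drop candidates the filter model flags (e.g., safety or policy checks).
        safe = [c for c in candidates if passes_filter(c)]
        if not safe:
            return None  # fall back rather than surface an unvetted reply
        # Re-rank the survivors and return the highest-scoring candidate.
        return max(safe, key=score)

    best = select_response(
        ["draft A", "draft B"],
        passes_filter=lambda c: "B" not in c,  # toy filter
        score=len,                             # toy relevance score
    )
    print(best)  # -> "draft A"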

Deploying AI responsibly

We collaborate closely with our customers, deploying generative AI solutions in a phased approach to ensure secure and responsible implementation at scale. This journey culminates in the establishment of a Center of Excellence, where our users evolve into AI experts. In this centralized hub, best practices, decision-making, and operational efficiency converge as our users continually refine the quality of models. Empowered with Cresta Opera, our experts orchestrate these models, placing the transformative capabilities of AI directly into the hands of those who understand it best—our users.

