
Senior Applied AI Scientist - Generative AI & Agentic Systems

Messagepoint

Software Engineering, Data Science
Toronto, ON, Canada
Posted on Oct 7, 2025

About Messagepoint

Messagepoint is a privately-owned software company providing an AI-powered SaaS solution that enables enterprises to manage and personalize customer communications with greater speed, accuracy, and control. Our award-winning platform empowers business users to create and optimize compliant, personalized communications across digital and print channels using advanced content intelligence and large language models.

Messagepoint is headquartered in Toronto with employees located throughout Canada, the United States and the UK.

About the Role

We are seeking a Senior Applied AI Scientist to join our team building the MARCIE AI Platform, where you'll design, build, and ship production-grade generative AI solutions using state-of-the-art large language models (LLMs) and multi-agent systems. You'll stay current with emerging research, apply proven AI techniques, and translate relevant breakthroughs into scalable production systems that solve real enterprise challenges.

About the MARCIE AI Platform

MARCIE is an enterprise AI platform focused on Customer Communication Management (CCM) across industry verticals. We apply generative and agentic AI to document processing, content understanding, and semantic analysis, including NLP and vision/object processing tasks. You'll build production systems that handle large volumes of multi-modal, multi-lingual customer communications with measurable improvements in accuracy, speed, and safety.

Our Team & Work Environment

You'll join an AI-first engineering team that uses AI-assisted development to ship production systems. We're looking for a strong engineer who stays current with AI research and can implement proven techniques in production. While you'll need to understand research papers and evaluate new approaches, your primary focus will be building robust, scalable systems that deliver real value to our enterprise customers.

What You'll Do

Core Engineering Responsibilities

  • Design and deploy production multi-agent AI systems using frameworks like LangGraph, OpenAI Assistants/Responses, Swarm, CrewAI, or AutoGen to solve complex business problems at scale (a minimal sketch of this kind of workflow follows this list)
  • Build production RAG pipelines implementing proven retrieval techniques including hybrid search, reranking, and knowledge graph integration for enterprise-grade performance
  • Engineer and optimize prompts as versioned code using established techniques (Chain-of-Thought, ReAct, self-consistency) with rigorous testing and evaluation
  • Implement comprehensive evaluation and monitoring pipelines with production KPIs, quality gates in CI/CD, and automated regression testing
  • Architect scalable agentic workflows with persistent state management, error recovery, and production-grade reliability
  • Deploy observability and debugging infrastructure using tools such as LangSmith, TruLens, and DeepEval for production monitoring and issue resolution
  • Integrate safety and compliance measures including PII detection, content filtering, guardrails, and security best practices
  • Optimize system performance and costs through caching strategies, model routing, batch processing, and efficient resource utilization
  • Build APIs and microservices for AI systems with proper versioning, documentation, and SLA management
  • Champion AI-assisted development using tools like Claude Code, Augment, and Gemini CLI to maximize engineering productivity
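
To make the first bullet above concrete, here is a minimal sketch of an agentic retrieval-and-generation workflow. It assumes LangGraph's StateGraph API; the retriever, LLM call, and document contents are hypothetical stubs for illustration, not a description of the MARCIE platform.

```python
# Minimal sketch of an agentic retrieval-and-generation workflow.
# Assumes LangGraph's StateGraph API; the retriever and LLM calls are
# hypothetical stubs, not a description of the MARCIE platform.
from typing import List, TypedDict

from langgraph.graph import END, StateGraph


class AgentState(TypedDict):
    question: str
    documents: List[str]
    answer: str


def search_index(query: str, top_k: int = 5) -> List[str]:
    # Placeholder retriever; a real pipeline would run hybrid search
    # against a vector store and rerank the results.
    return [f"stub document relevant to: {query}"]


def call_llm(prompt: str) -> str:
    # Placeholder LLM call; a real system would call a provider API.
    return "stub answer"


def retrieve(state: AgentState) -> dict:
    return {"documents": search_index(state["question"])}


def generate(state: AgentState) -> dict:
    context = "\n\n".join(state["documents"])
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {state['question']}"
    )
    return {"answer": call_llm(prompt)}


graph = StateGraph(AgentState)
graph.add_node("retrieve", retrieve)
graph.add_node("generate", generate)
graph.set_entry_point("retrieve")
graph.add_edge("retrieve", "generate")
graph.add_edge("generate", END)
app = graph.compile()

result = app.invoke({"question": "What changed in the Q3 statement template?",
                     "documents": [], "answer": ""})
print(result["answer"])
```

In production, the stub functions would be replaced by hybrid search with reranking and a monitored LLM client, with persistent state, error recovery, and observability wrapped around the compiled graph.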

Research & Innovation

  • Stay current with relevant AI research - Read papers from top conferences (NeurIPS, ICML, ACL) to identify techniques applicable to our use cases

  • Evaluate emerging techniques - Assess new methods for potential adoption, focusing on production viability and ROI

  • Implement proven research concepts - Translate well-validated research into production when it provides clear value

  • Conduct targeted experiments - Test specific hypotheses about system improvements with data-driven validation

Research-Informed Engineering

As a Senior Applied AI Scientist, you'll leverage research insights to build better production systems:

  • Evaluate proven techniques from research papers for practical implementation in our systems

  • Build targeted POCs when evaluating significant new approaches with clear success criteria

  • Make data-driven decisions about adopting new techniques based on production metrics

  • Share knowledge with the team about relevant research advances and their practical applications

Required Qualifications

Experience

  • 5+ years of software engineering experience with at least 2 years shipping LLM-powered products to production

  • MS in Computer Science or equivalent experience (PhD is a plus but not required) with a strong foundation in deep learning, NLP, or machine learning

  • Solid understanding of transformer architectures, attention mechanisms, and modern deep learning techniques

  • Ability to read and understand research papers to stay current with the field and identify useful techniques

Essential Production Experience

  • Proven track record shipping LLM applications with real users, SLAs, and production monitoring

  • Production deployment of multi-agent systems using frameworks like LangGraph, OpenAI Assistants, or similar

  • Experience building RAG pipelines at scale with vector databases, retrieval optimization, and quality metrics

  • Expert-level prompt engineering with systematic testing and optimization approaches (see the sketch after this list)

  • Building APIs and microservices for AI systems with proper error handling and scalability

  • Distributed computing and storage experience (e.g., Apache Spark, Ray, cloud storage solutions)

  • Strong software engineering practices including code review, testing, CI/CD, and documentation

  • Active use of AI-assisted development tools like Claude Code, Augment, Gemini CLI to maximize productivity

  • Experience with production ML/AI infrastructure including monitoring, debugging, and performance optimization
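
To illustrate what "prompts as versioned code with systematic testing" can look like in practice, here is a minimal sketch; the template text, version tag, and test are illustrative conventions only, not Messagepoint internals.

```python
# Minimal sketch: a versioned prompt template plus a regression test.
# The template text, version tag, and check are illustrative only.
from dataclasses import dataclass


@dataclass(frozen=True)
class PromptTemplate:
    name: str
    version: str
    template: str

    def render(self, **kwargs: str) -> str:
        return self.template.format(**kwargs)


SUMMARIZE_V2 = PromptTemplate(
    name="summarize_communication",
    version="2.1.0",
    template=(
        "You are reviewing a customer communication.\n"
        "Think step by step, then produce a two-sentence summary.\n\n"
        "Communication:\n{body}\n"
    ),
)


def test_prompt_renders_body_and_instructions() -> None:
    rendered = SUMMARIZE_V2.render(body="Your statement is ready.")
    assert "Your statement is ready." in rendered
    assert "two-sentence summary" in rendered
```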

Technical Skills

  • Python expertise with async programming and modern web frameworks (FastAPI, Django)

  • Vector database experience (pgvector, Pinecone, Weaviate, Milvus, or FAISS)

  • LLMOps/AgentOps tools familiarity (LangSmith, TruLens, DeepEval, Giskard)

  • SQL and search systems experience (PostgreSQL, MySQL, Elasticsearch, Solr, OpenSearch)

  • Cloud platform proficiency (AWS, GCP, or Azure) with containerization and orchestration

  • API design and microservices architecture for AI systems (see the sketch after this list)

  • Version control and collaborative development practices with Git
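
As a rough illustration of the API and async skills listed above, the sketch below exposes a generation call behind a versioned FastAPI endpoint; generate_answer and the route path are hypothetical placeholders, not an actual Messagepoint service.

```python
# Minimal sketch: a versioned FastAPI endpoint fronting an LLM call.
# `generate_answer` is a hypothetical placeholder for a real model client.
import asyncio

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-ai-service", version="1.0.0")


class CompletionRequest(BaseModel):
    prompt: str


class CompletionResponse(BaseModel):
    answer: str


async def generate_answer(prompt: str) -> str:
    # Placeholder: a real service would call an LLM provider asynchronously.
    await asyncio.sleep(0)
    return f"stub answer for: {prompt}"


@app.post("/v1/completions", response_model=CompletionResponse)
async def complete(request: CompletionRequest) -> CompletionResponse:
    answer = await generate_answer(request.prompt)
    return CompletionResponse(answer=answer)
```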

Preferred Qualifications

Engineering Excellence

  • Experience with multimodal AI systems - Production deployment of vision-language models, document understanding pipelines

  • GraphRAG implementations - Building knowledge graph-enhanced retrieval systems at scale

  • Model Context Protocol (MCP) - Experience implementing MCP servers and integrations

  • Performance optimization expertise - Quantization, caching strategies, model routing, batch processing

  • Fine-tuning and adaptation - Parameter-efficient methods (LoRA, QLoRA) for production use cases

  • Safety and guardrails implementation - Production deployment of content filtering, PII detection, jailbreak prevention

  • Advanced infrastructure experience - Kubernetes, distributed training, model serving platforms

  • Experience with frameworks like DSPy, LlamaIndex, Semantic Kernel for building production systems

Research Awareness

  • Understanding of latest research trends and ability to evaluate their practical applications

  • Experience implementing techniques from research papers in production settings

  • Participation in AI conferences or workshops (as attendee or presenter)

  • Open source contributions to AI/ML projects

  • Publications or blog posts about practical AI implementations (not required)

  • Working knowledge of open-source models via Hugging Face for NLP and image-based tasks

Potential Impact in First 90 Days

  • Ship a production LangGraph multi-agent system with comprehensive monitoring, error handling, and performance optimization

  • Deploy RAG pipeline improvements that reduce latency by 30% and improve retrieval accuracy through proven techniques

  • Implement evaluation and monitoring infrastructure with automated quality gates and regression testing in CI/CD (see the sketch after this list)

  • Optimize existing systems for cost and performance, achieving measurable improvements in token usage and response time

  • Stay current with research - Evaluate 2-3 relevant papers and implement one proven technique that improves system performance

  • Establish team knowledge sharing - Present findings on practical applications of recent research to the engineering team
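
As one sketch of what an automated quality gate could look like, the test below fails a CI run when retrieval accuracy on a small golden set drops below a threshold; the golden set, threshold, and stub retriever are hypothetical placeholders.

```python
# Minimal sketch: a CI quality gate that fails when retrieval accuracy
# regresses below a threshold. The golden set, threshold, and retriever
# stub are hypothetical placeholders, not Messagepoint internals.
from typing import List, Tuple

GOLDEN_SET: List[Tuple[str, str]] = [
    ("Where is the privacy notice?", "privacy_notice.pdf"),
    ("What is the Q3 statement date?", "q3_statement.pdf"),
]


def retrieve_top_document(question: str) -> str:
    # Placeholder retriever; CI would exercise the real RAG pipeline here.
    stub_results = {
        "Where is the privacy notice?": "privacy_notice.pdf",
        "What is the Q3 statement date?": "q3_statement.pdf",
    }
    return stub_results.get(question, "")


def evaluate_retrieval() -> float:
    hits = sum(
        1 for question, expected in GOLDEN_SET
        if retrieve_top_document(question) == expected
    )
    return hits / len(GOLDEN_SET)


def test_retrieval_accuracy_gate() -> None:
    # Fail the pipeline if accuracy drops below the agreed floor.
    assert evaluate_retrieval() >= 0.90
```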

Our Tech Stack

  • LLM Providers: OpenAI (primary), Gemini, AWS Bedrock - emphasizing flexibility and best-in-class selection for each use case

  • Agent Frameworks: LangGraph, OpenAI Assistants/Responses, CrewAI, Swarm, AutoGen

  • Vector/Graph Databases: pgvector, Pinecone, Milvus, Weaviate, Neo4j; Elasticsearch/OpenSearch for hybrid search

  • AI Development Tools: Claude Code, Augment, Gemini CLI, GitHub Copilot

  • Observability: LangSmith, TruLens, DeepEval, Giskard, OpenTelemetry, custom evaluation pipelines

  • Guardrails & Safety: NVIDIA NeMo Guardrails, AWS Bedrock Guardrails, LlamaGuard

  • Cloud Infrastructure: AWS (primary), with multi-cloud capabilities

  • Languages: Python (primary), TypeScript/JavaScript for full-stack integration

  • Integrations: Model Context Protocol (MCP), LangServe, FastAPI

Why Join Us

  • Engineering Impact: Build production AI systems that serve millions of customer communications daily

  • Technical Innovation: Work with cutting-edge LLM technologies and implement the latest proven techniques

  • Learning & Growth: Stay current with AI research while building practical engineering skills

  • Career Development: Clear path to Staff Engineer, Principal Engineer, or technical leadership roles

  • Team Excellence: Work with talented engineers who are passionate about both building great systems and understanding the science behind them

  • Resources: Access to latest models, robust compute infrastructure, and top-tier development tools

  • Knowledge Sharing: Regular tech talks, paper discussions (optional), and engineering best practices sessions

  • Conference Attendance: Support for attending major AI conferences to stay current with the field

  • Work-Life Balance: Focus on sustainable engineering practices and healthy team dynamics

Interview Process

  1. Main Interview (90 mins) - Meet with the team to discuss your experience, system design approach, and problem-solving skills in building production AI systems

  2. Hands-on Test (in person) - Practical assessment of your ability to build and optimize AI system components

Interview Signals We Look For

  • Production Engineering Excellence: Strong system design skills, code quality, and engineering best practices

  • LLM System Experience: Can discuss production challenges, optimizations, and trade-offs in LLM applications

  • Research Understanding: Ability to read and understand papers, evaluate techniques for production viability

  • Problem-Solving: Practical approach to solving real-world problems with appropriate technical solutions

  • Technical Communication: Can explain complex systems clearly and discuss trade-offs effectively

  • AI Tool Proficiency: Demonstrates effective use of AI-assisted development tools

  • Learning Mindset: Shows curiosity about new techniques while maintaining pragmatic engineering focus

  • Team Collaboration: Experience working in cross-functional teams and sharing knowledge

Messagepoint is an Equal Opportunity Employer and encourages diversity and inclusion in the workplace.

We thank you for your interest; however, only those who qualify for an interview will be contacted.