Join us for an evening designed for engineers, AI practitioners, and tech innovators eager to dive deep into the rapidly evolving landscape of AI hardware and large language models (LLMs). This is your chance to gain expert insights, engage in a fireside chat, and connect with Melbourne’s vibrant tech community.
AI is no longer a lab project; it is embedded in customer support, digital workflows, and enterprise decision-making systems. As organisations scale AI into real-world environments, inference has emerged as a critical layer of the AI stack.
This session will explore why inference is becoming the strategic battleground in enterprise AI and how purpose-built chips are redefining production systems.
The discussion will cover the following:
AI Production Inflection Point
Inference as the Competitive Advantage
Why Architecture Matters: Silicon to API
Determinism & Real-Time AI
The Economics of Inference at Scale
Selecting LLMs & Infrastructure Stack
Designing Production-Grade AI Systems
The Future of AI Acceleration
Enterprise Sales Leader, Groq
Representing Groq, Nidal will share insights into how the industry is rethinking AI compute and why deterministic, purpose-built inference architecture is reshaping performance at scale.
Groq was designed from the ground up for AI inference. Unlike general-purpose GPU architectures originally optimised for training and graphics workloads, Groq’s Language Processing Unit (LPU) is purpose-built to deliver deterministic execution, predictable performance, linear scaling, and high throughput with consistent response times.