Publications

Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.


Showing 1–15 of 10,822 publications
    Productionizing Quantum Mass Production
    Bill Huggins
    Nathan Wiebe
    arXiv for now (2026) (to appear)
    Abstract: For many practical applications of quantum computing, the slowest and most costly steps involve coherently accessing classical data. We help address this challenge by applying mass production techniques, which can sometimes allow us to perform operations many times in parallel for a cost comparable to that of a single execution [1-3]. We combine existing mass-production results with modern approaches for loading classical data using "quantum read-only memory." We show that quantum mass production techniques offer no benefit under a cost model that counts only non-Clifford gates. However, analyzing the constant factors in a more nuanced cost model, we find that it may be possible to obtain a cost reduction of an order of magnitude or more for a variety of reasonably sized fault-tolerant quantum algorithms. We present several applications of quantum mass-production techniques beyond naive parallelization, including a strategy for reducing the cost of serial calls to the same data-loading step.
    FreshBrew: A Benchmark for Evaluating AI Agents on Java Code Migration
    Diganta Misra
    Yanqi Luo
    Anjali Sridhar
    Justine Gehring
    Silvio Soares Ribeiro Junior
    2026
    Abstract: AI coding assistants are rapidly becoming integral to modern software development. A key challenge in this space is the continual need to migrate and modernize codebases in response to evolving software ecosystems. Traditionally, such migrations have relied on rule-based systems and human intervention. With the advent of powerful large language models (LLMs), AI-driven agentic frameworks offer a promising alternative—but their effectiveness remains underexplored. In this paper, we introduce FreshBrew, a novel benchmark for evaluating AI-based agentic frameworks on project-level Java migrations. We benchmark several such frameworks, powered by state-of-the-art LLMs, and compare their performance against established rule-based tools. Our evaluation of AI agents on this benchmark of 228 repositories shows that the top-performing model, Gemini 2.5 Flash, can successfully migrate 56.5% of projects to JDK 17. Our empirical analysis reveals novel insights into the critical strengths and limitations of current agentic approaches, offering actionable insights into their real-world applicability. By releasing FreshBrew publicly upon acceptance, we aim to facilitate rigorous, reproducible evaluation and catalyze progress in AI-driven codebase modernization.
    Abstract: We revisit the fundamental question of formally defining what constitutes a reconstruction attack. While often clear from context, our exploration reveals that a precise definition is much more nuanced than it appears, to the extent that a single all-encompassing definition may not exist. Thus, we employ a different strategy and aim to "sandwich" the concept of reconstruction attacks by addressing two complementary questions: (i) What conditions guarantee that a given system is protected against such attacks? (ii) Under what circumstances does a given attack clearly indicate that a system is not protected? More specifically: (1) We introduce a new definitional paradigm, Narcissus Resiliency, to formulate a security definition for protection against reconstruction attacks. This paradigm has a self-referential nature that enables it to circumvent shortcomings of previously studied notions of security. Furthermore, as a side effect, we demonstrate that Narcissus resiliency captures as special cases multiple well-studied concepts, including differential privacy and other security notions of one-way functions and encryption schemes. (2) We formulate a link between reconstruction attacks and Kolmogorov complexity, which allows us to put forward a criterion for evaluating when such attacks are convincingly successful.
    Abstract: In January 2025, over forty Aboriginal and Torres Strait Islander researchers, practitioners, community members, and allies gathered at the Centre for Global Indigenous Futures at the Wallumattagal Campus of Macquarie University in Sydney to envisage Aboriginal and Torres Strait Islander AI futures. This publication reports on attendees' vision for the future of AI for Aboriginal and Torres Strait Islander people.
    Correlated Error Bursts in a Gap-Engineered Superconducting Qubit Array
    John Mark Kreikebaum
    Leigh Martin
    Vlad Kurilovich
    Wojtek Mruczkiewicz
    Lara Faoro
    Gabrielle Roberts
    Alex Opremcak
    Alec Eickbusch
    Igor Aleiner
    Yu Chen
    arXiv (2025)
    Abstract: One of the roadblocks on the way to a fault-tolerant superconducting quantum processor is the impact of ionizing radiation on the qubit substrate. Such impacts temporarily elevate the density of quasiparticles (QPs) across the device, leading to correlated qubit error bursts. The most damaging errors, T1 errors, stem from QP tunneling across the qubit Josephson junctions (JJs). Recently, we demonstrated that this type of error can be strongly suppressed by engineering the profile of the superconducting gap at the JJs in a way that prevents QP tunneling. In this work, we identify a new type of correlated error that persists in the presence of gap engineering. We observe that impacts shift the frequencies of the affected qubits and thus lead to correlated phase errors. The frequency shifts are systematically negative, reach values up to 3 MHz, and last for ~1 ms. We provide evidence that the shifts originate from QP-qubit interactions in the JJ region. Further, we experimentally demonstrate that these correlated phase errors are detrimental to the performance of quantum error correction protocols.
    Software development is a team sport
    Jie Chen
    Alison Chang
    Rayven Plaza
    Marie Huber
    Claire Taylor
    IEEE Software (2025)
    Abstract: In this article, we describe our human-centered research on the role of collaboration and teamwork in productive software development. We describe the creation of a logs-based metric to identify collaboration through observable events and a survey-based multi-item scale to assess team functioning.
    Consideration on CMAS arriving as discrete particles
    Eric H. Jordan
    Stephen Jordan
    Hiram Diaz
    Byung-gun Jun
    (2025)
    Abstract: Turbine contaminants known as CMAS mostly arrive at turbine hot sections as individual particles spanning a range of mineral compositions; within a small area, the deposited particles can be treated as splats arriving at random locations. By the time the particles reach the hot section, the maximum particle size is believed to be 10 microns. A simplified heat-transfer analysis suggests the arrival temperature will be the turbine inlet temperature. Using AFRL03 as a representative set of possible minerals, a mixture of melted and un-melted particles will arrive for most turbine inlet temperatures. The 5 minerals of AFRL03 yield 31 combinations presenting a wide range of melting points, which are experimentally investigated in this paper. As expected, combinations generally melt at lower temperatures than the highest-melting mineral in each combination. The progression of conditions, starting with the arrival of isolated individual minerals, is modeled using Monte Carlo simulations and known results from percolation theory. This allows understanding of the development of the coverage fraction, and of the potential for mineral mixing important to melt behavior, as a function of normalized CMAS dose. Using the normalized CMAS dose it is also possible to comment on the likely fraction of coating life during which less-than-fully-homogenized CMAS dominates behavior. It is noteworthy that 4 out of 5 minerals and 4 mineral combinations lack either calcium or silicon or both and also melt below 1300°C. Interactions in the early deposition stage thus involve non-CMAS-like chemistries.
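The random-splat deposition picture in the abstract can be sketched with a toy Monte Carlo simulation (our illustration, not the authors' model): circular splats land at uniformly random positions on a periodic grid, and the covered fraction is compared with the analytic expectation 1 − exp(−dose) for random overlapping deposits.

```python
import math
import random

def simulate_coverage(grid=100, radius=3, n_splats=400, seed=0):
    """Deposit circular 'splats' at random positions on a periodic grid
    and return (normalized dose, covered area fraction)."""
    rng = random.Random(seed)
    covered = [[False] * grid for _ in range(grid)]
    # Cells inside one splat, as offsets from its center.
    disk = [(dx, dy)
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)
            if dx * dx + dy * dy <= radius * radius]
    for _ in range(n_splats):
        cx, cy = rng.randrange(grid), rng.randrange(grid)
        for dx, dy in disk:
            covered[(cx + dx) % grid][(cy + dy) % grid] = True  # periodic wrap
    total = grid * grid
    dose = n_splats * len(disk) / total   # total deposited area / grid area
    fraction = sum(row.count(True) for row in covered) / total
    return dose, fraction

dose, fraction = simulate_coverage()
# Random placement with overlap approaches fraction = 1 - exp(-dose).
print(dose, fraction, 1 - math.exp(-dose))
```

Because splats overlap at random, the coverage fraction saturates rather than growing linearly in dose, which is why mineral mixing within a splat footprint only becomes likely at higher normalized doses.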
    Synthetic Dialogue Generation for Interactive Conversational Elicitation & Recommendation (ICER)
    Moonkyung Ryu
    Mohammad Ghavamzadeh
    GENNEXT@SIGIR’25: The 1st Workshop on Next Generation of IR and Recommender Systems with Language Agents, Generative Models, and Conversational AI (2025)
    Abstract: Large language models (LLMs), despite their success in conducting natural conversations and solving various challenging NLP tasks, may not aim to understand and solicit a user's preferences, or to suggest recommendations based on the preferences learned through those interactions. One primary challenge in pursuing this task is the lack of rich conversational recommendation datasets (of user/agent dialogs) that contain diverse preference-elicitation scenarios on top of item recommendations. While several standard conversational recommender datasets do contain dialogs in which the agent queries users for preferences, they are often restricted to particular key-phrases that limit users' preference expressions to particular item features (e.g., location, price, rating). On the other hand, hiring human raters to create a personalized conversational recommender dataset with rich preference queries (using various abstract preference-describing attributes) and recommendations is an expensive, error-prone, and time-consuming process, because it requires data collection on numerous personalized recommendation scenarios, each of which may involve exponentially many possible preference-elicitation interactions. We propose a synthetic data generation methodology that uses a recommendation dialog simulator to generate templatized dialogs and the Gemini Ultra LLM to in-paint those dialogs so that they become linguistically more natural. The simulator generates recommendation dialogs that align with user preferences represented by embeddings.
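The templatized-dialog stage can be sketched as follows (illustrative only; the paper's simulator, preference embeddings, and the Gemini in-painting pass are not reproduced, and the template and item names here are hypothetical): slot-filled turns are generated from a preference profile, leaving natural-language smoothing to a separate LLM pass.

```python
import random

# Hypothetical templates; the LLM in-painting pass that rewrites them
# into natural language is deliberately stubbed out.
TEMPLATES = {
    "elicit": "AGENT: Do you prefer {attribute_a} or {attribute_b}?",
    "state": "USER: I usually go for {preference}.",
    "recommend": "AGENT: Based on that, you might enjoy {item}.",
}

def generate_dialog(preferences, catalog, seed=0):
    """Generate one templatized elicitation + recommendation dialog.
    `preferences` maps (attribute_a, attribute_b) pairs to the user's
    choice; `catalog` maps a chosen preference to candidate items."""
    rng = random.Random(seed)
    turns = []
    choice = None
    for (a, b), choice in preferences.items():
        turns.append(TEMPLATES["elicit"].format(attribute_a=a, attribute_b=b))
        turns.append(TEMPLATES["state"].format(preference=choice))
    item = rng.choice(catalog[choice])
    turns.append(TEMPLATES["recommend"].format(item=item))
    return turns

dialog = generate_dialog(
    {("cozy reads", "fast-paced thrillers"): "fast-paced thrillers"},
    {"fast-paced thrillers": ["The Midnight Chase"]},
)
print("\n".join(dialog))
```

The separation matters: templates guarantee that every dialog stays consistent with the simulated preference profile, while the in-painting pass only varies surface wording.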
    Triply efficient shadow tomography
    Robbie King
    David Gosset
    PRX Quantum, 6 (2025), pp. 010336
    Abstract: Given copies of a quantum state $\rho$, a shadow tomography protocol aims to learn all expectation values from a fixed set of observables, to within a given precision $\epsilon$. We say that a shadow tomography protocol is \textit{triply efficient} if it is sample- and time-efficient, and only employs measurements that entangle a constant number of copies of $\rho$ at a time. The classical shadows protocol based on random single-copy measurements is triply efficient for the set of local Pauli observables. This and other protocols based on random single-copy Clifford measurements can be understood as arising from fractional colorings of a graph $G$ that encodes the commutation structure of the set of observables. Here we describe a framework for two-copy shadow tomography that uses an initial round of Bell measurements to reduce to a fractional coloring problem in an induced subgraph of $G$ with bounded clique number. This coloring problem can be addressed using techniques from graph theory known as \textit{chi-boundedness}. Using this framework we give the first triply efficient shadow tomography scheme for the set of local fermionic observables, which arise in a broad class of interacting fermionic systems in physics and chemistry. We also give a triply efficient scheme for the set of all $n$-qubit Pauli observables. Our protocols for these tasks use two-copy measurements, which is necessary: sample-efficient schemes are provably impossible using only single-copy measurements. Finally, we give a shadow tomography protocol that compresses an $n$-qubit quantum state into a $\mathrm{poly}(n)$-sized classical representation, from which one can extract the expected value of any of the $4^n$ Pauli observables in $\mathrm{poly}(n)$ time, up to a small constant error.
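The single-copy classical-shadows baseline that the paper builds on can be illustrated for one qubit (a toy sketch of the standard protocol, not the paper's two-copy scheme): measure in a uniformly random Pauli basis, form the snapshot $3U^\dagger|b\rangle\langle b|U - I$, and average $\mathrm{Tr}(O \cdot \text{snapshot})$ over samples.

```python
import random

def mul(A, B):
    """2x2 complex matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dag(A):
    """Conjugate transpose."""
    return [[complex(A[j][i]).conjugate() for j in range(2)] for i in range(2)]

def tr(A):
    return A[0][0] + A[1][1]

I2 = [[1, 0], [0, 1]]
# Basis-change unitaries: rotate the X, Y, or Z measurement basis to Z.
U_Z = [[1, 0], [0, 1]]
U_X = [[2**-0.5, 2**-0.5], [2**-0.5, -(2**-0.5)]]           # Hadamard
U_Y = [[2**-0.5, -1j * 2**-0.5], [2**-0.5, 1j * 2**-0.5]]   # H @ S-dagger

def shadow_estimate(rho, obs, n_samples=20000, seed=1):
    """Estimate Tr(obs @ rho) from random single-copy Pauli-basis
    measurements, using the single-qubit classical-shadows inverse
    channel: snapshot = 3 * U^dag |b><b| U - I."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        U = rng.choice([U_Z, U_X, U_Y])
        rotated = mul(mul(U, rho), dag(U))
        b = 0 if rng.random() < rotated[0][0].real else 1  # Born rule
        proj = [[1 - b, 0], [0, b]]                        # |b><b|
        back = mul(mul(dag(U), proj), U)
        snapshot = [[3 * back[i][j] - I2[i][j] for j in range(2)]
                    for i in range(2)]
        total += tr(mul(obs, snapshot)).real
    return total / n_samples

rho0 = [[1, 0], [0, 0]]        # |0><0|
pauli_z = [[1, 0], [0, -1]]
est = shadow_estimate(rho0, pauli_z)   # converges to <Z> = 1
```

Each snapshot is an unbiased (but high-variance) estimate of $\rho$; the paper's contribution is showing when two-copy measurements are needed to make such estimation simultaneously sample- and time-efficient.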
    Abstract: The problem of learning to defer with multiple experts consists of optimally assigning input instances to experts, balancing the trade-off between their accuracy and computational cost. This is a critical challenge in natural language generation, but also in other fields such as image processing and medical diagnostics. Recent studies have proposed surrogate loss functions to optimize deferral, but challenges remain in ensuring their consistency properties. This paper introduces novel surrogate loss functions and efficient algorithms with strong theoretical learning guarantees. We address open questions regarding realizable $H$-consistency, $H$-consistency bounds, and Bayes-consistency for both single-stage (jointly learning predictor and deferral function) and two-stage (learning only the deferral function with a fixed expert) learning scenarios. For single-stage deferral, we introduce a family of new realizable $H$-consistent surrogate losses and further prove $H$-consistency for a selected member. For two-stage deferral, we derive new surrogate losses that achieve realizable $H$-consistency, $H$-consistency bounds, and Bayes-consistency for the two-expert scenario and, under natural assumptions, the multiple-expert scenario. Additionally, we provide enhanced theoretical guarantees under low-noise assumptions for both scenarios. Finally, we report the results of experiments using our proposed surrogate losses, comparing their performance against existing baselines.
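The accuracy/cost trade-off behind deferral can be simulated with a simple confidence threshold (a toy illustration under assumed accuracies and costs, not the paper's surrogate-loss method): route an instance to a costly expert whenever a cheap predictor's confidence is low, and measure both accuracy and expert-call rate.

```python
import random

def simulate_deferral(n=20000, threshold=0.6, expert_acc=0.95, seed=0):
    """Toy two-stage deferral. A cheap predictor is correct with
    probability equal to its confidence c ~ Uniform(0.3, 1.0); any
    instance with c < threshold is deferred to an expert that is
    correct with probability `expert_acc` at unit cost per call.
    Returns (overall accuracy, fraction of instances deferred)."""
    rng = random.Random(seed)
    correct = deferred = 0
    for _ in range(n):
        c = rng.uniform(0.3, 1.0)
        if c < threshold:
            deferred += 1
            correct += rng.random() < expert_acc
        else:
            correct += rng.random() < c
    return correct / n, deferred / n

acc, cost = simulate_deferral()
# Deferring low-confidence instances beats the predictor alone
# (expected accuracy 0.65) at well under the always-defer cost of 1.
print(acc, cost)
```

Sweeping `threshold` traces out the accuracy/cost frontier; the paper's surrogate losses learn the deferral rule rather than fixing a threshold by hand.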
    Inspect or Guess? Mechanism Design with Unobservable Inspection
    Azarakhsh Malekian
    Ali Daei Naby
    The 21st Conference on Web and Internet Economics (WINE) (2025) (to appear)
    Abstract: We study the problem of selling $k$ units of an item to $n$ unit-demand buyers to maximize revenue, where buyers' values are independently (and not necessarily identically) distributed. The buyers' values are initially unknown but can be learned at a cost through inspection sources. Motivated by applications in e-commerce, where the inspection is unobservable by the seller (i.e., buyers can externally inspect their values without informing the seller), we introduce a framework for finding the optimal selling strategy in this setting. We fully characterize the optimal mechanism for selling to a single buyer, subject to an upper bound on the allocation probability. Building on this characterization and leveraging connections to the \emph{Prophet Inequality}, we design an approximation mechanism for selling $k$ items to $n$ buyers that achieves $1-1/\sqrt{k+3}$ of the optimal revenue. Our mechanism is simple and sequential and achieves the same approximation bound in an online setting, remaining robust to the order of buyer arrivals. Additionally, in a setting with observable inspection, we leverage connections to index-based \emph{committing policies} in \emph{Weitzman's Pandora's problem with non-obligatory inspection} and propose a new sequential mechanism for selling an item to $n$ buyers that significantly improves the existing approximation factor to the optimal revenue from $0.5$ to $0.8$.
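The Prophet Inequality the paper leverages can be illustrated with the classic single-threshold rule (a generic textbook sketch, not the paper's mechanism): pick a threshold $T$ with $\Pr[\max_i v_i < T] = 1/2$ and accept the first buyer whose value clears it; in expectation this recovers at least half of the offline maximum.

```python
import random

def prophet_threshold_ratio(n_buyers=5, trials=20000, seed=0):
    """Buyer values are i.i.d. Uniform(0, 1). The threshold T solves
    P(max < T) = T**n_buyers = 1/2; the seller accepts the first
    arriving value >= T. Returns E[accepted] / E[max], which the
    prophet inequality guarantees is at least 1/2."""
    rng = random.Random(seed)
    threshold = 0.5 ** (1.0 / n_buyers)
    alg_total = max_total = 0.0
    for _ in range(trials):
        values = [rng.random() for _ in range(n_buyers)]
        accepted = next((v for v in values if v >= threshold), 0.0)
        alg_total += accepted
        max_total += max(values)
    return alg_total / max_total

ratio = prophet_threshold_ratio()   # comfortably above 0.5 here
print(ratio)
```

Like the paper's mechanism, this rule is sequential and order-robust: the guarantee does not depend on which buyer arrives first.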
    Abstract: Most contextual bandit algorithms assume that rewards are bounded uniformly, or with high probability for each context and action. Furthermore, both the theoretical regret and the empirical performance of most prevailing methods scale polynomially with this reward scale. In this work, we use ideas from robust mean estimation to design contextual bandit algorithms whose regret has only a mild logarithmic dependence on the reward scale, with a polynomial dependence on the second moment rather than on the maximum reward value. Our algorithm is based on Catoni's mean estimator and is applicable to arbitrary non-linear function classes, and we present both variance-aware and variance-agnostic versions of our approach.
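Catoni's mean estimator at the heart of this approach can be sketched directly (an illustrative stdlib implementation of the standard estimator; the bandit wiring is omitted): find the root of $\sum_i \psi(\alpha(x_i - \theta)) = 0$ with the soft influence function $\psi(x) = \mathrm{sign}(x)\log(1 + |x| + x^2/2)$, which caps the effect of any single heavy-tailed sample.

```python
import math
import random

def catoni_mean(xs, alpha, lo=-1e6, hi=1e6, iters=100):
    """Catoni's robust mean estimate: the root theta of
    sum_i psi(alpha * (x_i - theta)) = 0, where
    psi(x) = sign(x) * log(1 + |x| + x*x/2).
    psi is monotone, so the score is decreasing in theta and the
    root can be found by bisection."""
    def psi(x):
        return math.copysign(math.log1p(abs(x) + 0.5 * x * x), x)

    def score(theta):
        return sum(psi(alpha * (x - theta)) for x in xs)

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(1000)] + [1000.0]  # one outlier
naive = sum(data) / len(data)            # dragged up by ~1 by the outlier
robust = catoni_mean(data, alpha=0.05)   # stays near the true mean 0
```

Because $\psi$ grows only logarithmically, the outlier contributes a bounded score term instead of its raw magnitude, which is exactly the property that lets regret depend on the second moment rather than the maximum reward.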
    Enhanced $H$-Consistency Bounds
    Anqi Mao
    Proceedings of the 36th International Conference on Algorithmic Learning Theory (ALT 2025)
    Abstract: Recent research has introduced a key notion of $H$-consistency bounds for surrogate losses. These bounds offer finite-sample guarantees, quantifying the relationship between the zero-one estimation error (or other target loss) and the surrogate loss estimation error for a specific hypothesis set. However, previous bounds were derived under the condition that a lower bound of the surrogate loss conditional regret is given as a convex function of the target conditional regret, without non-constant factors depending on the predictor or input instance. Can we derive finer and more favorable $H$-consistency bounds? In this work, we relax this condition and present a general framework for establishing *enhanced $H$-consistency bounds* based on more general inequalities relating conditional regrets. Our theorems not only subsume existing results as special cases but also enable the derivation of more favorable bounds in various scenarios. These include standard multi-class classification, binary and multi-class classification under Tsybakov noise conditions, and bipartite ranking.
    Faster electronic structure quantum simulation by spectrum amplification
    Guang Hao Low
    Robbie King
    Dominic Berry
    Qiushi Han
    Albert Eugene DePrince III
    Alec White
    Rolando Somma
    Physical Review X, 15 (2025), pp. 041016
    Abstract: The most advanced techniques using fault-tolerant quantum computers to estimate the ground-state energy of a chemical Hamiltonian involve compression of the Coulomb operator through tensor factorizations, enabling efficient block encodings of the Hamiltonian. A natural challenge of these methods is the degree to which block-encoding costs can be reduced. We address this challenge through the technique of spectral amplification, which magnifies the spectrum of the low-energy states of Hamiltonians that can be expressed as sums of squares. Spectral amplification enables estimating ground-state energies with significantly improved cost scaling in the block-encoding normalization factor $\Lambda$: from $\Lambda$ down to just $\sqrt{2\Lambda E_{\mathrm{gap}}}$, where $E_{\mathrm{gap}} \ll \Lambda$ is the lowest energy of the sum-of-squares Hamiltonian. To achieve this, we show that sum-of-squares representations of the electronic structure Hamiltonian are efficiently computable by a family of classical simulation techniques that approximate the ground-state energy from below. In order to further optimize, we also develop a novel factorization that provides a trade-off between the two leading Coulomb integral factorization schemes—namely, double factorization and tensor hypercontraction—that when combined with spectral amplification yields a factor of 4 to 195 speedup over the state of the art in ground-state energy estimation for models of iron-sulfur complexes and a CO2-fixation catalyst.
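The headline scaling can be sanity-checked with toy numbers (ours, purely for illustration; they are not values from the paper): for a block-encoding normalization $\Lambda$ and a sum-of-squares ground energy $E_{\mathrm{gap}} \ll \Lambda$, the amplified cost factor is $\sqrt{2\Lambda E_{\mathrm{gap}}}$ instead of $\Lambda$.

```python
import math

def amplified_scaling(lam, e_gap):
    """Spectrum-amplified cost factor sqrt(2 * lam * e_gap), to be
    compared against the plain block-encoding factor lam.
    Meaningful in the regime e_gap << lam."""
    return math.sqrt(2.0 * lam * e_gap)

# Hypothetical numbers chosen for round arithmetic.
lam, e_gap = 4000.0, 5.0
plain = lam                                   # unamplified factor
amplified = amplified_scaling(lam, e_gap)     # sqrt(2*4000*5) = 200
speedup = plain / amplified                   # 4000 / 200 = 20
```

The square-root dependence means the advantage grows as the gap shrinks relative to the normalization, which is why the reported speedups vary so widely (4x to 195x) across systems.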
    Abstract: Julia's focus on mathematics in a broad sense and on execution performance makes it a popular choice across scientific fields. It is a language of choice for implementing new numerical algorithms, but it really shines in modelling for optimisation thanks to JuMP.jl and MathOptInterface.jl. These libraries are, first and foremost, made for mathematical optimisation (linear, mixed-integer, conic, etc.), yet they are now generic enough to support more paradigms, such as constraint programming. This talk will introduce the basic principles behind the current implementation of JuMP.jl and explain why and how they are a very good match for modelling with constraint programming… and solving with any kind of mixed-integer-programming solver. Constraint-programming solvers can also be implemented using linear programming, in a great collaboration between discrete and continuous optimisation. This talk will briefly explain this connection and its implementation in Google's CP-SAT, a leading, award-winning constraint solver that uses linear programs in its solving process — a solver that will soon be available in Julia too.