Research Highlights

Using the Universe as a Lab for Dark Matter Interactions, New Light Particles, and Neutrinos:

Dark Matter-Baryon Interactions:

  • Introduced a framework for model-independent constraints on dark matter-baryon scattering. The shrinking of the canonical-WIMP parameter space from null LHC and direct-detection searches, as well as possible difficulties for collisionless N-body simulations in reproducing observational data, motivate considering stronger baryon-DM interactions. In work in collaboration with Kfir Blum and Marc Kamionkowski (published in PRD), I derived the strongest constraints to date on elastic scattering between baryons and DM for a wide range of velocity-dependent cross sections, using measurements of the CMB fluctuations by the Planck satellite and Lyman-alpha flux power spectrum measurements from the Sloan Digital Sky Survey (SDSS). These constraints imply, model-independently, that a baryon in the halo of a galaxy like our Milky Way cannot have scattered from a DM particle over the history of the universe. More recently, the EDGES collaboration claimed a detection of the 21-cm absorption signal from neutral hydrogen in the early universe; taken at face value, their measurement implies hydrogen gas colder than the standard prediction. Our work inspired a paper published in Nature proposing DM-baryon scattering as a potential explanation of the claimed observation, as well as a number of other papers.
  • Probing sub-GeV DM with cosmology (complementary to direct detection searches): in work with my group, we showed that cosmology is complementary to dark matter direct detection experiments for DM masses below a GeV. We analyzed CMB data from Planck and Lyman-alpha forest data from the SDSS in the context of the sub-GeV DM scenario. Our analysis is particularly interesting given that lighter DM masses remain unexplored by current direct detection experiments. Our paper attracted attention from the community, since our constraints rule out the possibility of DM-baryon scattering explaining the EDGES claimed detection of neutral hydrogen. Furthermore, these scenarios are being proposed as the main driver of the dark matter science case for the next-generation CMB experiment, CMB-S4.
  • New limits on the light dark matter-proton cross section from the cosmic large-scale structure: In this work, we set the strongest limits to date on the velocity-independent dark matter (DM)-proton cross section σ for DM masses m = 10 keV to 100 GeV, using large-scale structure traced by the Lyman-alpha forest: e.g., a 95% confidence-level upper limit σ < 6×10⁻³⁰ cm² for m = 100 keV. Our results complement direct detection, which has limited sensitivity to sub-GeV DM. We use an emulator of Lyman-alpha forest simulations, combined with data from the smallest cosmological scales used to date, to model and search for the imprint of primordial DM-proton collisions. Cosmological bounds are improved by up to a factor of 25.
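The velocity-dependent cross sections referred to above are conventionally parametrized as a power law in the DM-baryon relative velocity (a schematic of the standard convention in this literature; the particular set of exponents shown is illustrative):

```latex
\sigma(v) \;=\; \sigma_0\, v^{\,n}\,, \qquad n \in \{-4,\,-2,\,-1,\,0,\,2\}\,,
```

where, e.g., n = -4 corresponds to Coulomb-like scattering through a light mediator and n = 0 to the velocity-independent case.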

Freeze-In Dark Matter:

  • New channel for DM Freeze-in: together with Katelin Schutz and Tongyan Lin, we identified an additional production channel for DM produced through the freeze-in mechanism: the decay of photons that acquire an in-medium plasma mass. These plasmon decays are a dominant channel for DM production for sub-MeV DM masses, and including this channel leads to a significant reduction in the predicted signal strength for DM searches. The DM acquires a highly non-thermal phase space distribution, which impacts the cosmology at later times. This work was an Editors’ Suggestion in PRD.
  • The Cosmology of sub-MeV Dark Matter Freeze-In: In our previous work, we realized that dark matter could be made from decaying photons in the early universe. This production mechanism sits at the nexus of several interesting properties: it is one of very few allowed ways of making dark matter lighter than the electron through thermal processes, it is the simplest allowed way to make dark matter with a small effective electric charge from a thermal process, and it is testable in the laboratory with some proposed experiments. In this work, we tested with cosmological observables whether this is the dark matter of our universe. If dark matter interacts with photons through its production mechanism, dark matter and baryonic matter can also interact at a small level; in that case, the photons can drag the dark matter (via the baryons), making our universe look less clumpy. We tested this idea using CMB data from the Planck satellite. Moreover, because the dark matter is made from decaying photons, it is born relativistic, so by looking at the statistics of dark matter clumps we can infer whether their growth was hindered. We looked for small galaxies with the DES survey, and also used gravitational lensing and stellar streams. We tested a wide range of parameter space and have so far excluded part of it; in the near future, we expect to be able to test this idea over an even wider range of parameters.
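The plasmon channel above can be summarized with standard in-medium relations (illustrative, not the papers' full thermal calculation): in a non-relativistic, non-degenerate plasma the photon acquires an effective mass given by the plasma frequency (with a relativistic generalization applying in the early universe), and its decay to DM pairs opens when that mass exceeds twice the DM mass:

```latex
\omega_p^2 \;\simeq\; \frac{4\pi\,\alpha\, n_e}{m_e}\,, \qquad
\gamma^{*} \to \chi\bar{\chi}
\quad \text{kinematically allowed for} \quad \omega_p > 2\, m_\chi\,.
```

The relevant decays occur around temperatures where ω_p is comparable to the DM mass, which is why this channel matters specifically for sub-MeV freeze-in.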

New Light Particles and Neutrinos:

  • Finding eV-scale Light Relics with Cosmological Observables: Relics with masses on the eV scale (meV-10 eV) become non-relativistic before today, and thus behave as matter instead of radiation. This leaves an imprint in the clustering of the large-scale structure of the universe, as light relics have important streaming motions. In work with my group, we studied how well current and upcoming cosmological surveys can probe light massive relics (LiMRs). We considered minimal extensions to the Standard Model (SM) by both fermionic and bosonic relic degrees of freedom. We found that a very large coverage of parameter space will be attainable by upcoming CMB and LSS experiments, opening the possibility of exploring uncharted territory for new physics beyond the SM.
  • In this work, we present the first general search for LiMRs with CMB, weak-lensing, and full-shape galaxy data. We demonstrate that weak-lensing data are critical for breaking parameter degeneracies, while full-shape information provides a significant boost in constraining power. Our constraints are the tightest and most comprehensive to date for scalars, Weyl fermions, Dirac fermions, and vectors.
  • Scale-dependent galaxy bias induced by light relics: with my postdoc Julian Muñoz, we computed the scale-dependent galaxy bias induced by light relics (not limited to neutrinos) of different masses, spins, and temperatures. We also made publicly available a code ("RelicFast") that efficiently computes the galaxy bias in under a second, allowing this effect to be properly included in likelihood analyses of different cosmologies with light relics, at little computational cost.
  • Accurately Weighting Neutrinos with Cosmological Surveys: in this work with my group, we found, through an MCMC likelihood analysis of future CMB and LSS data sets, that upcoming surveys will be able to "distinguish" neutrino hierarchies at the 1σ level. We further found that neglecting the growth-induced scale-dependent halo bias produced by neutrino mass (studied in Muñoz & Dvorkin 2018) can induce up to a 1σ overestimation of the total neutrino mass. We showed how to absorb this effect via a redshift-dependent parametrization of the scale-independent bias. To facilitate future data analyses, we released RelicCLASS: a publicly available code to compute CMB and LSS observables in the presence of massive neutrinos or any light relic.
  • I led the Neutrino Mass from Cosmology paper submitted to the US Decadal Survey, in which I argued that our understanding of the clustering of matter in the presence of massive neutrinos has significantly improved over the past decade, yielding cosmological constraints that are tighter than those from any laboratory experiment. These constraints will improve significantly over the next decade, resulting in a guaranteed detection of the absolute neutrino mass scale.

Unlocking Dark Matter Physics out of the Uncharted Small-Scale Structure of the Universe:

Direct Detection with Strong Gravitational Lensing:

  • First attempt to infer the presence of dark matter substructure in strong lens images with a binary classifier, without any intermediate lens modeling, using a Convolutional Neural Network (CNN): together with my Ph.D. student Ana Diaz Rivero, we trained a CNN to classify images based on whether they have detectable substructure. Tens of thousands of new lenses are expected to become available in the near future, and the new, fast approach to analyzing strong lens images proposed in this work is well suited to this era of large data sets.
  • Detecting Subhalos in Strong Gravitational Lens Images with Image Segmentation: in these two works with my group, we use machine learning to circumvent the need for lens and source modeling and develop a method to both locate subhalos in an image and determine their masses, using the technique of image segmentation. The network is trained on images with a single subhalo located near the Einstein ring. Training in this way allows the network to accurately detect entire populations of substructure, even at locations further from the Einstein ring than those used in training. With good accuracy and a low false-positive rate, counting the number of pixels assigned to each subhalo class over multiple images allows for a measurement of the subhalo mass function (SMF). When measured over five mass bins from 10⁹ to 10¹⁰ M⊙, the SMF slope is recovered with an error of 26% for 50 images, improving to 10% for 1000 images with HST-like noise. In follow-up work, we trained a UNet on more realistic systems and found that our machine learning algorithm quickly detects most substructure with masses above 10⁹ M⊙, given high image resolution and subhalo concentration.
  • Substructure Detection Reanalyzed: Dark Perturber shown to be a Line-of-Sight Halo: Observations of structure at sub-galactic scales are crucial for probing the properties of dark matter. It will become increasingly important for future surveys to distinguish between line-of-sight halos and subhalos to avoid wrong inferences on the nature of dark matter. In this work with my group, we reanalyze a sub-galactic structure (in lens JVAS B1938+666) that has been previously found using the gravitational imaging technique in galaxy-galaxy lensing systems. This structure has been assumed to be a satellite in the halo of the main lens galaxy. We fit the redshift of the perturber of the system as a free parameter, using the multi-plane thin-lens approximation, and find that the redshift of the perturber is consistent with it being a line-of-sight halo. The deflection angles caused by the interloper form a vector field with a non-vanishing curl which cannot be recreated by a single thin lensing plane. Moreover, the combined convergence of an interloper and a main lens differs from a naive sum of the two. These differences cause slight changes in the pixel brightnesses near the Einstein radius, which is what we used to constrain the redshift of the perturber. Our analysis also indicates that this structure is more massive than the previous result by more than an order of magnitude. This constitutes the first dark perturber shown to be a line-of-sight halo with a gravitational lensing method.
  • Probing Dark Matter with Strong Gravitational Lensing through an Effective Density Slope: Many dark matter (DM) models that are consistent with current cosmological data show differences in the predicted (sub)halo mass function, especially at sub-galactic scales, where observations are challenging due to the inefficiency of star formation. Strong gravitational lensing has been shown to be a useful tool for detecting dark low-mass (sub)halos through perturbations in lensing arcs, therefore allowing the testing of different DM scenarios. However, measuring the total mass of a perturber from strong lensing data is challenging, and over- or underestimating perturber masses can lead to incorrect inferences about the nature of DM. In this paper, we argue that inferring an effective slope of the dark matter density profile, i.e., the power-law slope of perturbers at intermediate radii, where we expect the perturber to have the largest observable effect, is a promising way to circumvent these challenges. Using N-body simulations, we show that (sub)halo populations under different DM scenarios differ in their effective density slope distributions. Using realistic mocks of Hubble Space Telescope observations of strong lensing images, we show that the effective density slope of perturbers can be measured robustly, with high enough accuracy to discern between different models. We also present our measurement of the effective density slope γ = 1.96 ± 0.12 for the perturber in JVAS B1938+666, which we find to be a 2σ outlier of the cold dark matter scenario. More measurements of this kind are needed to draw robust conclusions about the nature of dark matter.
  • Detecting Low-Mass Perturbers in Cluster Lenses using Curved Arc Bases: Strong gravitationally lensed arcs and arclets produced by the mass distribution in galaxy clusters have been detected for several decades now. These strong lensing constraints provided high-fidelity mass models for cluster lenses that include a detailed census of the substructure down to 10⁹⁻¹⁰ M⊙. Optimizing lens models, where the cluster mass distribution is modeled by a smooth component and subhalos associated with the locations of individual cluster galaxies, has enabled deriving the subhalo mass function, providing important constraints on the nature of dark matter. We explored and presented a novel method to detect and measure individual perturbers (subhalos, line-of-sight halos, and wandering supermassive black holes) by exploiting their proximity to highly distorted lensed arcs in galaxy clusters, and by modeling the local lensing distortions with curved arc bases. This method offers the possibility of detecting individual low-mass perturber subhalos in clusters and halos along the line-of-sight down to a mass resolution of 10⁸ M⊙. We quantify our sensitivity to low-mass perturbers with masses M ∼ 10⁷⁻⁹ M⊙ in clusters with masses M ∼ 10¹⁴⁻¹⁵ M⊙, by creating realistic mock data. Using three lensed images of a background galaxy in the cluster SMACS J0723, as seen by the James Webb Space Telescope, we study the retrieval of the properties of potential perturbers with masses M = 10⁷⁻⁹ M⊙. From the derived posterior probability distributions for the perturber, we constrain its concentration, redshift, and ellipticity. By allowing us to probe lower-mass substructures, the use of the curved arc bases can lead to powerful constraints on the nature of dark matter, since discrimination between dark matter models appears on smaller scales.
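The classification step in the CNN approach described in the first bullet above can be illustrated with a toy numpy sketch (a single hand-written convolutional filter plus a logistic output, not the trained architecture from the paper): the network maps a lens image directly to a probability of detectable substructure, with no intermediate lens modeling.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation (the basic CNN building block)."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def substructure_score(image, kernel, weight, bias):
    """Map an image to a probability of detectable substructure in (0, 1)."""
    feature_map = np.maximum(conv2d(image, kernel), 0.0)     # ReLU activation
    pooled = feature_map.mean()                              # global average pool
    return 1.0 / (1.0 + np.exp(-(weight * pooled + bias)))   # logistic output

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))   # stand-in for a lens image
kernel = rng.standard_normal((5, 5))    # in a real CNN this filter is learned
p = substructure_score(image, kernel, weight=1.0, bias=0.0)
print(p)  # a probability strictly between 0 and 1
```

In the actual work, many such filters are stacked in layers and trained end-to-end on simulated lens images labeled by whether they contain detectable substructure.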

Statistical Detection with Strong Gravitational Lensing:

  • New Formalism for dark matter (DM) substructure statistics proposed to discern among different DM scenarios: in work with my group, we developed a general formalism to compute from first principles the projected mass density (convergence) power spectrum of the substructure in galactic halos for different populations of dark matter subhalos. We constructed a halo-model-based formalism, computing the 1-subhalo and the 2-subhalo terms from first principles for the first time. We found that the asymptotic slope of the substructure power spectrum at large wavenumber reflects the internal density profile of the subhalos, and proposed this as a key observable to discern between different dark matter scenarios.
  • In subsequent work, we applied our formalism to N-body simulations and found excellent agreement with our predictions. Furthermore, we found that we can gain important information about the nature of dark matter from larger scales (0.1 kpc⁻¹ < k < 10 kpc⁻¹) than previously claimed, by comparing the amplitude and slope of the power spectrum from lenses at different redshifts. These scales are within reach of current and near-future observations and should be less sensitive to baryonic physics.
  • Line-of-sight halo contribution to the dark matter convergence power spectrum from strong gravitational lenses: in this work with my group, we studied a novel observable: the contribution of line-of-sight (LOS) halos to the convergence power spectrum. We showed that it is possible to define an effective convergence for multi-plane lensing systems with a dominant main lens coupled to lower-mass interlopers, and we tested our analytical results with mock lensing simulations obtained by ray tracing with the multi-lens-plane equation, finding excellent agreement. We found that the LOS halo contribution can be significantly larger than that from subhalos for many of the well-known systems in the literature (an interactive version, with different lens and source redshifts as well as different dark matter (DM) masses in substructure, is available online). Since the halo mass function is better understood from first principles, the dominance of interlopers in galaxy-galaxy lenses can be seen as a significant advantage when translating this observable into a constraint on DM. Furthermore, we pointed out that it is crucial to take the LOS contribution into account before making any claim about DM; otherwise, we risk wrongfully falsifying or reinforcing the standard ΛCDM scenario.
  • Inferring subhalo effective density slopes from strong lensing observations with neural likelihood-ratio estimation: In recent work with my group, we proposed the subhalo effective density slope as a more reliable observable than the commonly used subhalo mass function. The subhalo effective density slope is a measurement independent of assumptions about the underlying density profile, and it can be inferred for individual subhalos through traditional sampling methods. To go beyond individual subhalo measurements, we introduced a neural likelihood-ratio estimator to infer an effective density slope for populations of subhalos. We demonstrated that our method is capable of harnessing the statistical power of multiple subhalos (within and across multiple images) to distinguish between characteristics of different subhalo populations. The computational efficiency of the neural likelihood-ratio estimator over traditional sampling enables statistical studies of dark matter perturbers, which is particularly useful as we expect an influx of strong lensing systems from upcoming surveys. Building on this work, we demonstrated the feasibility of this method on real strong lensing observations: we used our trained model to predict the effective subhalo density slope from a combined set of strong lensing images taken by the Hubble Space Telescope. We found the subhalo slope measurement for this set of observations to be steeper than the slope predictions for cold dark matter subhalos. Our result adds to several previous works that also measured high subhalo slopes in observations.
  • Probing the small-scale structure in strongly lensed systems via transdimensional inference: We implement a pipeline to model strongly lensed systems using probabilistic cataloging, a transdimensional, hierarchical, and Bayesian framework to sample from a metamodel (a union of models with different dimensionality) consistent with observed photon count maps. Probabilistic cataloging allows one to robustly characterize modeling covariances within and across lens models with different numbers of subhalos. Unlike traditional cataloging of subhalos, it does not require each modeled subhalo to improve the goodness of fit above the detection threshold; instead, it exploits all of the information contained in the photon count maps, for instance when constraining the subhalo mass function. Using a simulated Hubble Space Telescope data set, we show that the subhalo mass function can be probed even when many subhalos in the sample catalogs are individually below the detection threshold and would be absent from a traditional catalog. We further show that, by not including these small subhalos in the lens model, fixed-dimensional inference methods can significantly mismodel the data.
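The halo-model decomposition of the substructure convergence power spectrum described in the first bullet above has the schematic form (illustrative notation, with overall normalizations suppressed):

```latex
P_{\rm sub}(k) \;=\; P_{\rm 1sh}(k) + P_{\rm 2sh}(k)\,, \qquad
P_{\rm 1sh}(k) \;\propto\; \int dm\,\frac{dn}{dm}\,
\big|\tilde{\kappa}(k\,|\,m)\big|^{2}\,,
```

where dn/dm is the subhalo mass function and κ̃(k|m) is the Fourier transform of the projected density profile of a subhalo of mass m. At large wavenumber the 1-subhalo term dominates, which is why the asymptotic slope of the power spectrum tracks the internal subhalo profile.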
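The likelihood-ratio trick underlying the neural estimator in the bullet above can be sketched in one dimension (a toy logistic classifier in place of the neural network, and two hypothetical Gaussian "populations" in place of lensing images): a classifier trained to separate samples drawn under two hypotheses recovers their likelihood ratio via r(x) = s(x) / (1 - s(x)).

```python
import numpy as np

rng = np.random.default_rng(1)
x0 = rng.normal(0.0, 1.0, 2000)   # samples under hypothesis A
x1 = rng.normal(2.0, 1.0, 2000)   # samples under hypothesis B
x = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(2000), np.ones(2000)])

# Plain gradient descent on the binary cross-entropy loss.
w, b = 0.0, 0.0
for _ in range(2000):
    s = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.1 * np.mean((s - y) * x)
    b -= 0.1 * np.mean(s - y)

def likelihood_ratio(xq):
    """Estimated p_B(x) / p_A(x) from the classifier output."""
    s = 1.0 / (1.0 + np.exp(-(w * xq + b)))
    return s / (1.0 - s)

# For two unit-variance Gaussians the true log-ratio is linear in x, so the
# estimator should exceed 1 where hypothesis B is more likely and fall below
# 1 where hypothesis A is more likely.
print(likelihood_ratio(3.0) > 1.0, likelihood_ratio(-1.0) < 1.0)
```

In the actual analysis the classifier is a neural network acting on images, and the ratios from many subhalos and images multiply together, which is what gives the method its population-level statistical power.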

Probing Dark Matter Fluctuations with 21-cm Observations:

  • Model-agnostic probe of dark matter at small scales using 21-cm data: in this work with my group, we studied how upcoming 21-cm measurements during cosmic dawn provide a powerful handle on the small-scale structure of our universe. Using both the 21-cm global signal and its fluctuations, we performed a principal component (PC) analysis to obtain model-agnostic constraints on the matter power spectrum, showing that they are mostly sensitive to wavenumbers k ~ 40-80 Mpc⁻¹, which are currently unobserved scales. We found that the 21-cm global signal allows us to measure 2 PCs with signal-to-noise ratios larger than five, while the 21-cm fluctuations allow 3, 4, and 5 PCs to be measured under the assumption of pessimistic, moderate, and optimistic foregrounds, respectively. We projected several non-CDM models onto our PCs, finding that the 21-cm signal during cosmic dawn can improve the constraints on all of these models over other current cosmic probes, such as the Lyman-alpha forest.
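The principal-component step above amounts to an eigendecomposition of perturbations to the matter power spectrum over the relevant range of wavenumbers; a minimal numpy sketch with toy numbers (a mock rank-3 ensemble, not the paper's Fisher matrix or foreground model):

```python
import numpy as np

rng = np.random.default_rng(2)
k = np.logspace(np.log10(40), np.log10(80), 50)   # toy grid over k ~ 40-80 / Mpc

# Mock ensemble of fractional power-spectrum perturbations (delta P / P),
# built from 3 underlying modes so only 3 PCs carry variance.
samples = rng.standard_normal((200, 3)) @ rng.standard_normal((3, k.size))
samples -= samples.mean(axis=0)

# PCA via SVD of the mean-subtracted sample matrix: rows of Vt are the PCs,
# and the squared singular values give the variance carried by each.
U, svals, Vt = np.linalg.svd(samples, full_matrices=False)
variance_fraction = svals**2 / np.sum(svals**2)
print(variance_fraction[:3].sum())  # the 3 leading PCs carry ~all the variance
```

In the real analysis, the components are ranked by how well the 21-cm global signal and fluctuations can measure them, and non-CDM models are projected onto the leading PCs to obtain constraints.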

Discovering New Physics from Current and Upcoming Cosmological Data with Novel Techniques:

  • Data Compression and Inference in Cosmology with Self-Supervised Machine Learning: The influx of massive amounts of data from current and upcoming cosmological surveys necessitates compression schemes that can efficiently summarize the data with minimal loss of information. We introduce a method that leverages the paradigm of self-supervised machine learning in a novel manner to construct representative summaries of massive datasets using simulation-based augmentations. Deploying the method on hydrodynamical cosmological simulations, we show that it can deliver highly informative summaries, which can be used for a variety of downstream tasks, including precise and accurate parameter inference. We demonstrate how this paradigm can be used to construct summary representations that are insensitive to prescribed systematic effects, such as the influence of baryonic physics. Our results indicate that self-supervised machine learning techniques offer a promising new approach for the compression of cosmological data as well as its analysis.
  • An Analysis of BOSS Data with Wavelet Scattering Transforms: We perform the first application of the wavelet scattering transform (WST) to galaxy observations, through an analysis of the BOSS DR12 CMASS dataset. This is the first field-level cosmological analysis of an actual galaxy data set. In order to capture the cosmological dependence of the WST, we use galaxy mocks obtained from the state-of-the-art ABACUSSUMMIT simulations, tuned to match the anisotropic correlation function of the BOSS CMASS sample in the redshift range 0.46 < z < 0.60. Using our theory model for the WST coefficients, as well as for the first two multipoles of the galaxy power spectrum, which we use as reference, we perform a likelihood analysis of the CMASS data and obtain the posterior probability distributions of four cosmological parameters, {ωb, ωc, ns, σ8}, as well as the Hubble constant derived from a fixed value of the angular size of the sound horizon at last scattering measured by the Planck satellite, all marginalized over the 7 nuisance parameters of the Halo Occupation Distribution model. We found that the WST delivers improvements in the predicted 1σ errors compared to the regular power spectrum. These results are exploratory and subject to certain approximations in our analysis. In a follow-up paper, we perform a reanalysis of the BOSS dataset using a simulation-based emulator for the WST coefficients. In order to confirm the accuracy of our pipeline, we subject it to a series of thorough internal and external mock parameter recovery tests before applying it to reanalyze the CMASS observations. We find that a joint WST + 2-point correlation function likelihood analysis allows us to obtain marginalized 1σ errors on the ΛCDM parameters that are tighter by a factor of 2.5-6 compared to the 2-point correlation function alone.
  • Towards an Optimal Estimation of Cosmological Parameters with the Wavelet Scattering Transform: Optimal extraction of the non-Gaussian information encoded in the Large-Scale Structure (LSS) of the universe lies at the forefront of modern precision cosmology. In this work, we propose achieving this task through the use of the Wavelet Scattering Transform (WST), which subjects an input field to a layer of non-linear transformations that are sensitive to non-Gaussianity in spatial density distributions through a generated set of WST coefficients. In order to assess its applicability in the context of LSS surveys, we apply the WST to the 3D overdensity field obtained from the Quijote simulations. We find a large improvement in the marginalized errors on all cosmological parameters, ranging between 1.2 and 4 times tighter than the corresponding errors obtained from the regular 3D cold dark matter + baryon power spectrum, as well as a 50% improvement over the neutrino mass constraint given by the marked power spectrum. Through this first application to 3D cosmological fields, we demonstrate the great promise held by this statistic and set the stage for its future application to actual galaxy observations.
  • Flow-based likelihoods for non-Gaussian inference: in this work with my PhD student Ana Diaz Rivero, we suggest using a data-driven likelihood that we call the flow-based likelihood (FBL) to deal with known (or suspected) non-Gaussianities (NG) in data sets. FBLs are the optimization targets of flow-based generative models, a class of models that can capture complex distributions by transforming a simple base distribution through layers of nonlinearities; we point out that this is more accurate than other methods previously used to deal with NG in data sets. We apply FBLs to mock weak lensing convergence power spectra and find that the FBL captures the NG signatures in the data extremely well, while other commonly used data-driven likelihoods, such as Gaussian mixture models and independent component analysis, fail to do so. Unlike other methods, the flexibility of the FBL makes it successful at tackling different types of NG simultaneously. Because of this flexibility, and their likely applicability across datasets and domains, we encourage the use of FBLs for inference whenever sufficient mock data are available for training.
  • A novel technique for Cosmic Microwave Background foreground subtraction: together with former undergraduate students at Harvard, Sebastian Wagner-Carena and Max Hopkins, and Ph.D. student Ana Diaz Rivero, we introduced a Bayesian hierarchical framework for source separation. We find improved performance of our algorithm when compared to state-of-the-art Internal Linear Combination (ILC)-type algorithms under various metrics: the root mean square error of the residual between the reconstructed CMB and the input CMB maps, the cross-power spectrum between the residual map and the foregrounds, and the difference between the power spectra of the input CMB and the reconstructed CMB. Our results open a new avenue for constructing CMB maps through Bayesian hierarchical analysis. Foreground contamination is one of the principal challenges for precision measurements of the ISW signal, gravitational lensing, primordial non-Gaussianity, and constraints on isotropy, and our algorithm was built to tackle it.
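The scattering coefficients used in the WST analyses above can be illustrated in one dimension (toy Gaussian band-pass filters standing in for Morlet wavelets; production analyses of 3D fields use dedicated codes such as Kymatio): convolve the field with filters at dyadic scales, take the modulus (the non-linearity that makes the WST sensitive to non-Gaussianity), and average spatially.

```python
import numpy as np

def scattering_S1(x, n_scales=4):
    """First-order scattering coefficients S1[j] = mean(|x * psi_j|)."""
    N = x.size
    freqs = np.fft.fftfreq(N)
    xf = np.fft.fft(x)
    coeffs = []
    for j in range(n_scales):
        f0 = 0.25 / 2**j                  # dyadic center frequencies
        sigma = f0 / 2.0
        # Toy Gaussian band-pass filter around +/- f0 (Morlet stand-in):
        psi_f = np.exp(-((np.abs(freqs) - f0) ** 2) / (2 * sigma**2))
        filtered = np.fft.ifft(xf * psi_f)
        coeffs.append(np.mean(np.abs(filtered)))  # modulus + spatial average
    return np.array(coeffs)

rng = np.random.default_rng(3)
field = rng.standard_normal(1024)         # stand-in for a 1D density field
print(scattering_S1(field))               # 4 first-order coefficients
```

Second-order coefficients, which capture further non-Gaussian information, repeat the same filter-modulus-average operation on each first-order modulus field.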
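The change-of-variables bookkeeping behind the flow-based likelihood can be shown with a single affine layer (real FBLs stack many learned nonlinear layers; this toy flow is exact only for a Gaussian): an invertible map sends data x to a latent z, and the base density times the Jacobian of the inverse gives an exact density for x.

```python
import numpy as np

def flow_log_likelihood(x, scale, shift):
    """log p(x) for the toy flow x = scale * z + shift with z ~ N(0, 1)."""
    z = (x - shift) / scale                        # inverse transform
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))   # standard-normal log-density
    log_det = -np.log(np.abs(scale))               # log |dz/dx|, the Jacobian term
    return log_base + log_det

# With scale=2, shift=1 this reproduces the N(1, 4) log-density exactly:
x = np.array([0.0, 1.0, 3.0])
print(flow_log_likelihood(x, scale=2.0, shift=1.0))
```

Training a flow maximizes exactly this log-likelihood over the transform's parameters, which is why the trained model doubles as a data-driven likelihood for inference.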

Deciphering the Physics of the Early Universe:

Tests of Slow-Roll and Single-Field Inflation:

Primordial Non-Gaussianities:

  • Formalism for the bispectrum of inflationary models with features: I extended, together with collaborators, the “Generalized Slow Roll” formalism to the bispectrum in a series of papers (Fast Computation and Non-Gaussianity from step features). The Planck collaboration used our formalism to search for inflationary features and, more generally, it has been widely implemented in the literature to study different inflationary scenarios. These feature searches serve as consistency checks of slow-roll inflation and have been carried out by the Planck collaboration and other groups.
  • Precise and Accurate Cosmology with CMBxLSS Power Spectra and Bispectra: With the advent of a new generation of cosmological experiments, it is of paramount importance to exploit the full potential of joint analyses of multiple cosmological probes. In this work with my PhD student Shu-Fan Chen and my postdoc Hayden Lee, we study the cosmological information content contained in the one-loop power spectra and tree bispectra of galaxies cross-correlated with CMB lensing. We use the FFTLog method to compute angular correlations in spherical harmonic space, applicable for wide angles that can be accessed by forthcoming galaxy surveys, going beyond the usual Limber approximation. We find that adding the bispectra and cross-correlations with CMB lensing offers a large improvement in parameter constraints, including those on the total neutrino mass, Mν, and local non-Gaussianity amplitude, fNL. In particular, our results suggest that the combination of the Vera Rubin Observatory’s Legacy Survey of Space and Time (LSST) and CMB-S4 will be able to achieve σ(Mν)=42 meV from galaxy and CMB lensing correlations, and σ(Mν)=12 meV when further combined with the CMB temperature and polarization data, making it possible to distinguish between neutrino hierarchies without any prior on the optical depth.
  • Measuring the LSS and the CMBxLSS bispectra with weighted skew-spectra: To date, most of the cosmological information from the large-scale structure of the universe has been extracted from 2-point clustering statistics. It is well known that there is a wealth of information in higher-order statistics, but extracting it is more challenging than from the power spectrum, due to the significant computational cost. In work with my group, I used cross-spectra of the galaxy density field and weighted quadratic fields (“weighted skew-spectra”) as an estimator for the galaxy bispectrum, and showed that the skew-spectra statistics can recover the predictions from the bispectrum (both the primordial one and that from gravitational evolution). Computationally, evaluating the skew-spectra is equivalent to power spectrum estimation: it requires O(N log N) operations, where N is the number of modes, as opposed to the O(N²) operations typically required for the bispectrum, making this estimator significantly more efficient. As a follow-up, we applied weighted skew-spectra for the first time to galaxy survey (BOSS) data and put constraints on the equilateral bispectrum shape of primordial non-Gaussianity. In another work, we applied a modified version of the weighted skew-spectrum in harmonic space as a means to extract non-Gaussian information from the CMBxLSS. Our results suggest that, for the combination of the Planck satellite and the Dark Energy Spectroscopic Instrument (DESI), the skew-spectrum achieves almost equivalent information to the bispectrum on both bias and cosmological parameters.
  • Efficient method for computing cosmological four-point correlations: Angular cosmological correlators are infamously difficult to compute. Together with my postdoc Hayden Lee we introduced a method to compute in a fast and reliable way the angular galaxy trispectrum at tree level, with and without primordial non-Gaussianity, as well as the non-Gaussian covariance of the angular matter power spectrum, beyond the Limber approximation, applying the FFTLog algorithm. In the era of high-precision cosmology and large datasets, it is imperative to build efficient algorithms for calculating and estimating cosmological observables. As a follow-up, we introduced kurt-spectra to probe fourth-order statistics of weak lensing convergence maps. We used state-of-the-art numerical simulations and found agreement with our theoretical predictions.
  • Imprints of massive spinning particles in the large-scale structure of the universe: In work with my group, we presented a theoretical template for the bispectrum generated by massive spinning particles in the early universe, valid for a general triangle configuration of momenta, when the approximate conformal symmetry of the inflationary background is broken. We investigated the prospects of measuring these signals with upcoming galaxy surveys, and our results suggest that two next-generation spectroscopic galaxy surveys, DESI and Euclid, could be sensitive to probing the effect of massive particles with non-zero spin.
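The computational advantage of the weighted skew-spectra discussed above comes from replacing a triangle-by-triangle bispectrum sum with FFT-based cross-spectra. A 1D toy sketch (using a plain δ² quadratic field rather than the papers' bispectrum-matched weights): square the field in real space, then cross-correlate it with the field itself, which is an ordinary O(N log N) power-spectrum estimate.

```python
import numpy as np

def skew_spectrum(delta, nbins=8):
    """Band-averaged cross-spectrum of delta with its quadratic field."""
    N = delta.size
    dk = np.fft.rfft(delta)
    qk = np.fft.rfft(delta**2 - np.mean(delta**2))  # quadratic field, mean removed
    cross = (dk * np.conj(qk)).real / N             # cross-power per mode
    edges = np.linspace(0, cross.size, nbins + 1).astype(int)
    return np.array([cross[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])

rng = np.random.default_rng(4)
delta = rng.standard_normal(4096)   # stand-in for a 1D density field
print(skew_spectrum(delta))         # 8 band-averaged skew-spectrum values
```

Each choice of real-space weight applied before squaring isolates a different bispectrum shape, which is how the estimator targets, e.g., the equilateral primordial signal.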

Primordial Gravitational Waves:

  • Gravitational waves (analysis of BICEP and Planck data): In 2015, I joined the joint analysis between the BICEP2, Keck Array, and Planck collaborations. I worked on the likelihood analysis of a multi-component model that included galactic foregrounds and a possible contribution from inflationary gravitational waves. The code that I wrote was made publicly available and has been extensively used by the community. We reported no statistically significant evidence for primordial gravitational waves and strong evidence for galactic dust (published in PRL). My code was used in subsequent BICEP/Keck collaboration papers.

Disentangling Inflation from Reionization Signatures in the CMB:

  • High-redshift ionization preferred by Planck data: the usual imposition of a steplike ionization history requires the optical depth to reionization to come mainly from low redshifts. Together with my graduate student Georges Obied and our collaborators, we relaxed this assumption and found that the Planck 2015 data show a preference for a component of high-redshift (z > 10) ionization (early stars), in contradiction with claims made by the Planck collaboration. We found that marginalizing over inflationary freedom does not weaken the preference for z > 10 ionization. These findings prompted the Planck collaboration to revise their standard way of analyzing the reionization history and opened up an ongoing debate in the community.
  • New CMB B-mode contribution from patchy reionization: I showed that existing calculations of the B-mode polarization power spectrum from reionization were incomplete, by finding an additional source of B-modes. These B-modes have since been sought in simulations by many groups.
  • New statistical technique for extracting the patchy reionization signal from CMB measurements: I developed a new statistical technique for extracting the inhomogeneous reionization signal from measurements of the CMB polarization. In this method, a quadratic combination of the E-mode and B-mode polarization fields is used to reconstruct a map of fluctuations in the CMB optical depth. This statistical technique has been widely used by the community, and it is one of the main ways in which CMB-S4 is planning to extract the inhomogeneous reionization signal from measurements of the CMB E-mode and B-mode polarization fields.
    In subsequent work, we showed that the cross-correlation of this optical depth estimator with the 21-cm field is sensitive to the detailed physics of reionization, and can be measured with upcoming radio interferometers and CMB experiments.
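Schematically (minimum-variance weights and normalization suppressed; illustrative notation), the patchy-reionization reconstruction described above takes the standard quadratic-estimator form:

```latex
\hat{\tau}_{LM} \;=\; N_L \sum_{\ell m,\;\ell' m'}
\Gamma^{LM}_{\ell m\,\ell' m'}\; E_{\ell m}\, B^{*}_{\ell' m'}\,,
```

where the weights Γ follow from the coupling of E- and B-modes induced by optical-depth fluctuations, and N_L normalizes the estimator so that it is unbiased, ⟨τ̂_LM⟩ = τ_LM.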