BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//132.216.98.100//NONSGML kigkonsult.se iCalcreator 2.20.4//
BEGIN:VEVENT
UID:20260415T075915EDT-3536Zuog5S@132.216.98.100
DTSTAMP:20260415T115915Z
DESCRIPTION:TITLE: “Expanding the scope of post-selection inference”\n\n
 ABSTRACT: Contemporary data analysis pipelines often use the same data b
 oth to generate and subsequently test a null hypothesis. This practice is
  problematic\, as classical testing procedures that fail to account for t
 he data-dependence of the hypothesis do not control Type I error rates. T
 his problem\, commonly referred to as post-selection inference\, is p
 ervasive in modern science. One way to perform valid post-selection infere
 nce is to test the hypothesis conditional on the fact that the data were u
 sed to select the hypothesis. However\, for the resulting conditional dist
 ribution to be tractable\, the selection event must be amenable to mathema
 tical characterization\, and multivariate Gaussianity of the data is typi
 cally required. In practice\, such assumptions are rigid and limit applic
 ability.\n\nIn this talk\, I will discuss a sequence of projects that expan
 d the scope of post-selection inference through the careful use of externa
 l randomness. I first present “data thinning”\, a strategy for partitionin
 g each entry of a data matrix into two independent pieces\, one for explor
 ation and one for testing\; because the two pieces are independent\, any
  selection algorithm can be used for exploration and classical testing
  procedures
  can be applied for inference. Data thinning enables valid post-selection 
 inference with data generated from a broad class of distributions\, both w
 ithin and beyond the exponential family\, and is particularly useful in in
 stances where the sample size is small\, the data are non-identically dist
 ributed\, or selection involves unsupervised learning algorithms. For sett
 ings in which data thinning is not available\, I present a second strategy
  in which each entry of a data matrix is partitioned into two dependent pi
 eces. As before\, the first piece is used to generate a hypothesis. Infer
 ence is conducted by orthogonalizing the second piece with respect to the
  first under the selected null\, then testing whether the orthogonalizati
 on was successful.
  Together\, these frameworks provide analysts with a suite of tools for co
 nducting valid post-selection inference in diverse settings. \n\n🔗 Zoom: h
 ttps://mcgill.zoom.us/j/89001500476\n
DTSTART:20251205T183000Z
DTEND:20251205T193000Z
LOCATION:Room 1104\, Burnside Hall\, 805 rue Sherbrooke Ouest\, Montreal\,
  QC\, H3A 0B9\, CA
SUMMARY:Ameer Dharamshi (University of Washington)
URL:https://www.mcgill.ca/mathstat/channels/event/ameer-dharamshi-universit
 y-washington-369525
END:VEVENT
END:VCALENDAR
