# Choosing “Big M” Values

I seem to bring up “big M” models a lot, so apologies if I end up repeating myself in places here. Not long ago, someone passed along highlights of a “big M” type model to me and asked if he could somehow reformulate it to get rid of $$M$$. I did not see any good way …

More

# Adding Items to a Sequence

A question posed on OR-Exchange in 2017 asked the following: Given a tour of nodes, how does one best add two new nodes while respecting the ordering of the original tour? Specifically, the author began with a tour 0 – 1 – 2 – 4 – 6 – 0 (where node 0 is a depot) …

More

# NP Confusion

I just finished reading a somewhat provocative article on the CIO website, titled “10 reasons to ignore computer science degrees” (when hiring programmers). While I’m not in the business of hiring coders (although I was recently hired as a “student programmer” on a grant — the Universe has a sense of humor), I find myself …

More

# ICML notes

It’s never been a better time to work on AI/ML. Vast resources are being deployed in this direction, by corporations and governments alike. In addition to the marvelous practical applications in development, a theoretical understanding of Deep Learning may emerge in the next few years. The notes below are to keep track of some interesting …

More

# Selecting Box Sizes

Someone posted an interesting question about box sizes on Mathematics Stack Exchange. He (well, his girlfriend to be precise) has a set of historical documents that need to be preserved in boxes (apparently using a separate box for each document). He wants to find a solution that minimizes the total surface area of the boxes …

More

# Finding a “Core Point”

In a famous (or at least relatively famous) paper [1], Magnanti and Wong suggest a method to accelerate the progress of Benders decomposition for certain mixed-integer programs by sharpening “optimality” cuts. Their approach requires the determination of what they call a core point. I’ll try to follow their notation as much as possible. Let $$Y$$ …

More

# Gork revisited, 2018

It’s been almost 10 years since I made the post Are you Gork? Over the last decade, both scientists and non-scientists have become more confident that we will someday create: (A) AGI (= sentient AI, named “Gork” :-); see Rise of the Machines: Survey of AI Researchers. (B) Quantum computers; see Quantum Computing at a Tipping …

More

# Ordering Index Vector with Java Streams

I bumped up against the following problem while doing some coding in Java 8 (and using streams where possible). Given a vector of objects $$x_1, \dots, x_N$$ that come from some domain having an ordering $$\le$$, find the vector of indices $$i_1, \dots, i_N$$ that sorts the original values into ascending order, i.e., such that …
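The stream-based idiom below is one way to sketch this (my own minimal version, not necessarily the exact code from the full post): generate the index range, box it, and sort the indices by comparing the values they point to.

```java
import java.util.Comparator;
import java.util.stream.IntStream;

public class OrderIndex {
    // Return the indices that put x into ascending order.
    static int[] order(double[] x) {
        return IntStream.range(0, x.length)   // indices 0..N-1
                .boxed()                      // Stream<Integer> so we can sort by a key
                .sorted(Comparator.comparingDouble((Integer i) -> x[i]))
                .mapToInt(Integer::intValue)
                .toArray();
    }

    public static void main(String[] args) {
        double[] x = {3.0, 1.0, 2.0};
        // Prints [1, 2, 0]: x[1] <= x[2] <= x[0]
        System.out.println(java.util.Arrays.toString(order(x)));
    }
}
```

The boxing step is the annoyance: `IntStream.sorted()` takes no comparator, so the indices must be boxed to `Integer` to sort them by an external key.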

More

# Quantum Computing near a Tipping Point?

I received an email from a physicist colleague suggesting that we might be near a “tipping point” in quantum computation. I’ve sort of followed quantum computation and quantum information as an outsider for about 20 years now, but haven’t been paying close attention recently because it seems that practical general purpose quantum computers are still …

More

# Recursive Cortical Networks: data efficient computer vision

Will knowledge from neuroscience inform the design of better AIs (neural nets)? These results from startup Vicarious AI suggest that the answer is yes! (See also this company blog post describing the research.) It has often been remarked that evolved biological systems (e.g., a baby) can learn much faster and using much less data than existing artificial neural nets. …

More

# Creating A New MIME Type

I struggled a bit this afternoon creating a new MIME type and associating it with a particular application, so I’m going to archive the solution here for future reference. This was on a Linux Mint system, but I found the key information in a GNOME documentation page, so I suspect it works for Ubuntu and …

More

# Robot Overlords and the Academy

In a previous post, Half of all jobs (> \$60k/y) coding related?, I wrote that in the future there will be two kinds of jobs: workers will either tell computers what to do or be told by computers what to do. I’ve been pushing Michigan State University to offer a coding bootcamp experience to all undergraduates who want …

More

# Benders Decomposition with Generic Callbacks

Brace yourself. This post is a bit long-winded (and arguably geekier than usual, which is saying something). Also, it involves CPLEX 12.8, which will not ship until some time next month. I have an updated version of an old example, solving a fixed charge transportation problem using Benders decomposition. The example (using Java, naturally) is …

More

As I noted in yesterday’s post, one of the major changes associated with the new “generic” callback structure in CPLEX is that users now bear the responsibility of making their callbacks thread-safe. As I also noted yesterday, this is pretty new stuff for me. So I’m going to try to share what I know about thread …

More

# CPLEX 12.8: Generic Callbacks

IBM is getting ready to release CPLEX 12.8, and I had the opportunity to attend a presentation about it by Xavier Nodet at the 2017 INFORMS annual meeting. Here are links to two presentations by Xavier: CPLEX Optimization Studio 12.8 – What’s New and CPLEX 12.8 – the Generic Callback. As with any new release, there …

More

# What “R” qualitative research methods?

I recently stumbled upon a post on R-bloggers entitled “Qualitative Research in R.” This title got me pretty excited, since I’m generally excited about most things R and since I recently helped teach a qualitative methods course, which has had me thinking about adding more ethnographic and other qualitative elements to my work. I’d also heard of …

More

# AlphaGo Zero: algorithms over data and compute

AlphaGo Zero was trained entirely through self-play — no data from human play was used. The resulting program is the strongest Go player ever by a large margin, and is extremely efficient in its use of compute (running on only 4 TPUs). Previous versions of AlphaGo initially trained on thousands of human amateur and professional games …

More

# Information Theory of Deep Neural Nets: “Information Bottleneck”

This talk discusses, in terms of information theory, how the hidden layers of a deep neural net (thought of as a Markov chain) create a compressed (coarse grained) representation of the input information. To date the success of neural networks has been a mainly empirical phenomenon, lacking a theoretical framework that explains how and why …

More

# Where are participants in American and Canadian teacher hashtags?

My dissertation research is focused on Regional Educational Twitter Hashtags (RETHs), which are teacher-focused hashtags that are associated with particular geographic regions, such as American states or Canadian provinces or territories. This isn’t the first time that I’ve done a project on this phenomenon, and it’s rewarding to come back to RETHs to answer some questions that …

More

# A Gentle Introduction to Neural Networks

“A gentle introduction to the principles behind neural networks, including backpropagation. Rated G for general audiences.” This is very well done. If you have a quantitative background you can watch it at 1.5x or 2x speed, I think :-) A bit more on the history of backpropagation and convexity: why is the error function convex, or nearly …

More

# Minimizing a Median

$$\def\xorder#1{x_{\left(#1\right)}} \def\xset{\mathbb{X}} \def\xvec{\mathbf{x}}$$A somewhat odd (to me) question was asked on a forum recently. Assume that you have continuous variables $$x_{1},\dots,x_{N}$$ that are subject to some constraints. For simplicity, I’ll just write $$\xvec=(x_{1},\dots,x_{N})\in\xset$$. I’m going to assume that $$\xset$$ is compact, and so in particular the $$x_{i}$$ are bounded. The questioner wanted to …

More

# DeepMind and StarCraft II Learning Environment

This Learning Environment will enable researchers to attack the problem of building an AI that plays StarCraft II at a high level. As observed in the video, this infrastructure development required significant investment of resources by DeepMind / Alphabet. Now, researchers in academia and elsewhere have a platform from which to explore an important class …

More

# Updated Stepwise Regression Function

Back in 2011, when I was still teaching, I cobbled together some R code to demonstrate stepwise regression using F-tests for variable significance. It was a bit unrefined, not intended for production work, and a few recent comments on that post raised some issues with it. So I’ve worked up a new and (slightly) improved …

More

# Rolling Horizons

I keep seeing questions posted by people looking for help as they struggle to optimize linear programs (or, worse, integer linear programs) with tens of millions of variables. In my conscious mind, I know that commercial optimizers such as CPLEX allow models that large (at least if you have enough memory) and can often solve …

More

# Helpful resources for principal components analysis in R

I’m currently working on my dissertation proposal, which has meant exploring principal components analysis. I’ve worked with PCA before, but it’s been a couple of years, so I’m trying to refresh my memory, improve my understanding, and get this proposal moving! Along the way, I’ve found (and been recommended) some helpful resources that I thought …

More

# Robots taking our jobs

The figures below are from the recent paper Robots and Jobs: Evidence from US Labor Markets, by Acemoglu and Restrepo. VoxEU discussion: … Estimates suggest that an extra robot per 1000 workers reduces the employment to population ratio by 0.18-0.34 percentage points and wages by 0.25-0.5%. This effect is distinct from the impacts of imports, …

More

# Don’t Touch the Computer

Under what circumstances should humans override algorithms? From what I have read I doubt that a hybrid team of human + AlphaGo would perform much better than AlphaGo itself. Perhaps worse, depending on the epistemic sophistication and self-awareness of the human. In hybrid chess it seems that the Elo rating of the human partner is …

More

# Memory Minimization

As I grow older, I’m starting to forget things (such as all the math I ever learned) … but that’s not the reason for the title of this post. A somewhat interesting question popped up on Mathematics StackExchange. It combines a basic sequencing problem (ordering the processing of computational tasks) with a single resource constraint …

More

# Super-human Relational Reasoning (DeepMind)

These neural nets reached super-human (better than an average human) performance on tasks requiring relational reasoning. See the short video for examples. A simple neural network module for relational reasoning https://arxiv.org/abs/1706.01427 Adam Santoro, David Raposo, David G.T. Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, Timothy Lillicrap (Submitted on 5 Jun 2017) Relational reasoning is a …

More

# Probing deep networks: inside the black box

See also AI knows best: AlphaGo “like a God”: Humans are going to have to learn to “trust the AI” without understanding why it is right. I often make an analogous point to my kids — “At your age, if you and Dad disagree, chances are that Dad is right” :-) Of course, I always …

More

# Face Recognition applied at scale in China

The Chinese government is not the only entity that has access to millions of faces + identifying information. So do Google, Facebook, Instagram, and anyone who has scraped information from similar social networks (e.g., US security services, hackers, etc.). In light of such ML capabilities it seems clear that anti-ship ballistic missiles can easily target …

More

# Premature Obituaries

[T]he report of my death was an exaggeration. (Mark Twain, 1897) In a recent blog post, “Data Science Is Not Dead”, Jean-Francois Puget discussed, and dissented from, a post by Jeroen ter Heerdt titled “Data Science is dead.” Barring the possibility that Schroedinger shoved data science into a box and sealed it, both assertions cannot …

More

# Rise of the Machines: Survey of AI Researchers

These predictions are from a recent survey of AI/ML researchers. See SSC and also here for more discussion of the results. When Will AI Exceed Human Performance? Evidence from AI Experts Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, Owain Evans Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, …

More

# Epistemic Caution and Climate Change

I have not, until recently, invested significant time in trying to understand climate modeling. These notes are primarily for my own use; however, I welcome comments from readers who have studied this issue in more depth. I take a dim view of people who express strong opinions about complex phenomena without having understood the underlying …

More

# If This and That Then Whatever

I was asked a question that reduced to the following: if $$x$$, $$y$$ and $$z$$ are all binary variables, how do we handle (in an integer programming model) the requirement “if $$x=1$$ and $$y=1$$ then $$z=1$$”? In the absence of any constraints on $$z$$ when the antecedent is not true, this is very easy: add …
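The standard linearization of this implication (a sketch of the usual trick; the full post may present it differently) needs only one inequality:

```latex
% x, y, z binary; enforce: if x = 1 and y = 1 then z = 1
z \ge x + y - 1
% When x = y = 1 the right-hand side equals 1, forcing z = 1.
% Otherwise the right-hand side is 0 or -1, leaving z unconstrained.
```

Since nothing constrains $$z$$ when the antecedent fails, this single constraint suffices.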

More

# Update Error: Wrong Architecture

Yesterday I ran into one of those mystifying glitches that, while infrequent, serve as a reminder that Linux is not for the faint of heart. When I booted my desktop system (Linux Mint 18.1 Serena), the system tray icon for the software updater was displaying a red “X” that indicates it tried and failed to …

More

# AI knows best: AlphaGo “like a God”

Humans are going to have to learn to “trust the AI” without understanding why it is right. I often make an analogous point to my kids — “At your age, if you and Dad disagree, chances are that Dad is right” :-)  Of course, I always try to explain the logic behind my thinking, but …

More

# Sharing “Implicit” Test Problems

The topic of reproducible research is garnering a lot of attention these days. I’m not sure there is a 100% agreed upon, specific, detailed definition of it, and I do think it’s likely to be somewhat dependent on the type of research, but for purposes of this post the Wikipedia definition (previous link) works for …

More

# Yann LeCun on Unsupervised Learning

This is a recent Yann LeCun talk at CMU. Toward the end he discusses recent breakthroughs using GANs (Generative Adversarial Networks, see also Ian Goodfellow here and here). LeCun tells an anecdote about the discovery of backpropagation. The first implementation of the algorithm didn’t work, probably because of a bug in the program. But they convinced …

More