Note from the Editor
We are announcing the winners of the IJOC Test of Time Paper Award once per quarter (with each issue of the journal) to cover the backlog of awards since the journal’s inception. The specifics of this award are as follows:
Number of awards: One per calendar year.
Goal: Recognition of a published IJOC paper that has proven impactful over a length of time. Considerations can be citations per year, downloads per year, influence of sparking new areas of research, practical implications, significance of findings, and so forth.
Criteria: All papers published in the time window are eligible. A paper can be recognized with this award only once. The time window is defined as a rolling window of 5 years starting 15 years ago.
Deadline: None. Papers are considered on an annual basis.
Selection: Small committee appointed by the editor-in-chief.
Recognition: Certificate of Test of Time Award (transmitted by email) and annual recognition in the journal (paper, authors, affiliations, citation).
Procedure: The set of papers published in IJOC during the time window with their citations per year (since publishing) will be sent to the committee members for their deliberation. A winner is selected by the committee, and the editor-in-chief is notified.
I am happy to report that the remarkably able committee, chaired by John Chinneck with members Bill Cook, Bruce Golden, Pascal Van Hentenryck, and David Woodruff, has selected the awardee, covering the period 2003–2007. What follows is the citation from the award committee and a reflection on the paper and this award by the authors.
I want to thank the committee for their continued efforts and am very pleased to share this recognition of the impactful heritage of our journal.
My best regards,
The Test of Time Award for papers published in the INFORMS Journal on Computing in the years 2003–2007 is awarded to the following:
Implementing Sponsored Search in Web Search Engines: Computational Evaluation of Alternative Mechanisms
Juan Feng, Hemant K. Bhargava, David M. Pennock
INFORMS Journal on Computing (2007) 19(1):137–148
https://doi.org/10.1287/ijoc.1050.0135
Test of Time Award Citation 2003–2007
This paper from Feng, Bhargava, and Pennock is among the earliest publications to study mechanisms for the allocation of ads by search engine providers. The paper reports extensive computational simulations, allowing the authors to compare expected revenue obtained from a variety of ad placement strategies. The study provides insights into the impact of attention decay and other key factors. This highly cited paper continues to be an important reference for the community of researchers studying this rapidly growing sector of the advertising market. The work is a very nice example of the reach of operations research techniques into the new economy.
Comments on the IJOC Test of Time Paper Award from the Authors, Juan Feng, Hemant K. Bhargava, and David M. Pennock
In the early 2000s, as we started this collaboration, internet search engines were only a few years young but had become a vital piece of the internet and our lives more generally. We puzzled over how free search engines such as Google could make money, never imagining today’s annual revenues well over $100 billion. The initial answer came from a startup called Overture, spun off from an incubator in Pasadena and later acquired by Yahoo.
Overture pioneered text-based ads that were priced, sold, and implemented very differently from the “banners” in newspapers, magazines, and early websites, and Google perfected the practice. Instead of negotiating bulk sales, Google and Overture allocated ad slots one at a time using large-scale, continuous, real-time auctions. The setting allowed instantaneous measurement of the relevance and performance of ads, enabling search engines to collect money only when users actually clicked—about 40 cents per click on average in 2003. Our paper focused on a few key issues needed to make auction-based, pay-per-click advertising work well.
First, we discussed the game-theoretic nature of the problem involving three players: the search engine, advertisers, and users. Our model captured essential features of the setting while acknowledging plenty of missing complexities such as equilibrium play, affiliate partnerships, inexact query matching, budgets, click fraud, pay per conversion, and more. Our ability to formulate a reasonable model and delineate its caveats benefitted from our interdisciplinary backgrounds in business, information science, and computer science and our experience in the search advertising industry.
Second, observing that the two key companies employed quite different approaches, we sought to compare the performance of different ranking methods. Overture ranked ads in decreasing order of bids after editorial filtering, and Google ranked ads by bids times estimated click probabilities: effectively per-impression bids. Our paper explored the implications of picking one approach (rules that rank by actual bids) versus the other (rules that combine bid and relevance). We showed that the correlation between relevance and bids was a crucial factor, with positive correlation being the most natural case.
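The contrast between the two ranking rules can be made concrete with a small simulation in the spirit of the paper’s computational comparisons. This is a minimal sketch, not the paper’s actual model: the advertisers, bids, click probabilities, slot count, and geometric attention-decay factor below are all hypothetical numbers chosen for illustration.

```python
def simulate(rank_key, advertisers, n_slots=3, n_queries=10_000, decay=0.7):
    """Expected pay-per-click revenue under a given ranking rule.

    advertisers: list of (bid, click_prob) pairs.
    decay: stylized attention decay -- each lower slot receives `decay`
    times the clicks of the slot above it.
    """
    ranked = sorted(advertisers, key=rank_key, reverse=True)[:n_slots]
    revenue = 0.0
    for slot, (bid, ctr) in enumerate(ranked):
        # Clicks fall off with slot position; the advertiser pays its bid per click.
        revenue += n_queries * ctr * (decay ** slot) * bid
    return revenue

# Hypothetical advertisers: (bid in dollars, estimated click probability).
ads = [(0.50, 0.02), (0.40, 0.08), (0.30, 0.05), (0.20, 0.10)]

rev_by_bid = simulate(lambda a: a[0], ads)             # Overture-style: rank by bid
rev_by_product = simulate(lambda a: a[0] * a[1], ads)  # Google-style: bid x relevance
```

In this particular instance, where bids and relevance are negatively correlated, the bid-times-relevance rule earns more; reversing the correlation can reverse the ranking rules’ relative performance, which is the sensitivity the paper examines.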
Third, we recognized the tension in employing relevance scores in this extremely dynamic growth phase of the industry, in which search engines were continuously faced with new bidders for which they had no information to compute relevance. Some of these bidders may well be a great fit for the end user, but how would a search engine know until it experimented with these new bidders? Frequently giving new bidders top billing would enable faster learning but risked losing revenue and user satisfaction if they proved to be less relevant, setting up a multiarmed bandit problem. This part of our paper explored the implications of alternative methods for experimenting with new bidders, including a method for dynamically determining the number of trials to give to new participants. We modeled click probability with a relevance factor and a position factor: a crucial “separability” assumption that became standard in the literature.
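The separability assumption, and why it makes experimentation tractable, can be sketched as follows. Under separability, the click probability of an ad in a slot factors into an ad-specific relevance term times a slot-specific position term, so a new bidder’s relevance can be estimated from trial impressions in any slot by dividing out the known position factor. All specific numbers here (relevance values, position factors, trial counts) are hypothetical, and this simple fixed-trial scheme stands in for the paper’s dynamic trial-allocation method.

```python
import random

random.seed(1)

# Separability: click probability = relevance(ad) * position(slot).
position_factor = [1.0, 0.7, 0.5]                        # attention decay by slot
true_relevance = {"incumbent": 0.06, "newcomer": 0.09}   # unknown to the engine

def click(ad, slot):
    """Simulate one impression: a Bernoulli click under separability."""
    return random.random() < true_relevance[ad] * position_factor[slot]

def estimate_relevance(ad, slot=0, n_trials=5000):
    """Grant a new bidder n_trials impressions in a slot, then back out
    its relevance by dividing the observed click rate by the slot's
    known position factor."""
    clicks = sum(click(ad, slot) for _ in range(n_trials))
    return clicks / n_trials / position_factor[slot]

est = estimate_relevance("newcomer")  # approaches 0.09 as n_trials grows
```

The trade-off the text describes is visible here: more trials sharpen the estimate but spend impressions (and revenue) on an unproven bidder, which is exactly the exploration-versus-exploitation tension of the bandit formulation.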
A surprising fraction of the discussion in the paper remains relevant today. For example, reinforcement learning with stochastic rewards and utility functions that incorporate long-term metrics, including user satisfaction in advertising, remains an important and unsolved challenge. Still, the online advertising industry and, more broadly, the online market design community have progressed and expanded dramatically on many dimensions since our paper was published (and “plasma televisions,” one of our example queries, are extinct). Search engines today feature product listings, custom display features, reserve prices, and actions beyond clicks, including downloads, submits, likes, views, and purchases, initiated from GPS-enabled phones and digital assistants in our homes. Ads on social media and recommendation systems face related challenges.
We thank the editors of the INFORMS Journal on Computing and the members of the 2020 IJOC Test of Time Paper Award committee for selecting our paper for this award. We feel lucky to have found a rich problem area with practical implications and academic value in its infancy and are honored to be recognized. Almost 20 years and multiple jobs later, we remain excited as ever about the promise of market design as an engineering discipline.