This page describes school-related projects; my other home page has more about my personal life and my non-school-related activities (such as writing computer games).
The inspiration for my research was working with complex programs. I wanted to find ways to better deal with complexity, both state (data) and modularity (code), so I studied combining functional and object-oriented programming styles. Managing state in a program is tricky, and state is a major source of complexity and bugs. How do we deal with it? Object-oriented programming gives us one way to manage state. Functional programming goes one step further and gives us ways to avoid having state in the first place.
In the programs I write, I’d like to avoid state as much as possible, but when I do have state, I want a nice way to manage it. During my time at Stanford I looked at ways to use functional programming and object-oriented programming together, with the hope that two simple systems working together can be more expressive than a complex functional system or a complex object system. For example, inheritance is a transformation from one class to another; what if you could use recursion, composition, and other operations on inheritance to build new classes?
I found that in my programming projects I made a big distinction between objects and values (now called “value objects” by those who want everything to be an “object”). Objects have identity, polymorphism, and state; values are serializable and copyable. When you try mixing the two, you get weird behavior, such as C++ copy constructors “slicing” objects. For my research I kept them separate, using functional programming for values and object-oriented programming for objects.
Many years after I left Stanford, I became interested in a different way to manage state: combining functional programming, used for computation, with relational programming, used in databases to manage state. Note that SQL was developed in the 1970s and is not a great example of a relational system, but it’s the only experience most people have with relational systems.
There is a mismatch between objects and databases, and none of the object-relational mapping systems bridge it well. Some people are trying to solve this by making the database object-oriented. What if we instead solved it by making the programming language relational? The game industry is exploring entity systems, which are somewhat along these lines. Also potentially interesting: functions are relations defined implicitly; database tables are relations defined explicitly. Joining tables seems related to applying functions. Is there a unified theory that would allow us to mix functions and tables?
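The function/table correspondence can be made concrete in a few lines. Below is a small Python sketch (all names and data are invented for illustration): one relation is explicit (a list of rows), one is implicit (a function), and "joining" the table with the function produces a new relation.

```python
# An explicit relation: a table of (employee, department) rows
dept_of = [("alice", "sales"), ("bob", "engineering")]

# An implicit relation: a function from department to floor
def floor_of(dept):
    return {"sales": 2, "engineering": 3}.get(dept, 0)

# "Joining" the table with the function: each row's department is
# looked up, yielding a new (employee, floor) relation
emp_floor = [(emp, floor_of(dept)) for emp, dept in dept_of]

assert emp_floor == [("alice", 2), ("bob", 3)]
```

The join and the function application are the same operation seen from two sides: replace `floor_of` with another table plus a lookup, and this becomes an ordinary relational join.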
Looking back on my life, there were several things that led me to study certain aspects of object-oriented programming in graduate school.
Learning Scheme affected me a great deal, and learning ML too. Both made me appreciate clean semantics of languages, and made me question many things. For example, in ML (and to some extent in Scheme) a variable is actually the composition of two things: a name and a “cell” that can be modified to store a value. By separating the two notions, you can start using them in new and interesting ways. For example, you might have two names for the same cell, or you can have names without cells, or you might have cells with no names. Scheme and ML are full of things like this that make you think about the fundamental building blocks for languages.
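The name/cell split can be illustrated outside ML, too. Here is a rough Python analogue, with a hypothetical `Cell` class standing in for an ML `ref`; Python variables are just names, so the cell must be reified explicitly:

```python
# A "cell" is a one-slot mutable container (hypothetical stand-in for an ML ref)
class Cell:
    def __init__(self, value):
        self.value = value

# A name bound to a cell
x = Cell(0)

# Two names for the same cell: y aliases the cell named by x
y = x
y.value = 42
assert x.value == 42  # writing through y is visible through x

# A name without a cell: an ordinary immutable binding
z = 10

# A cell with no name: created, used, and discarded anonymously
assert Cell(z + 1).value == 11
```

Once names and cells are separate notions, aliasing, immutability, and anonymous storage all fall out as combinations of the two building blocks.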
In the late 80s I learned object-oriented programming. As I used it, I started to question some of the standard approaches. In particular, I didn’t see “everything is an object” as a virtue, especially when everything used inheritance. I started to explore a mixture of objects (focusing on composition and polymorphism instead of inheritance) and “plain old data” types. When I discovered C++’s STL, I learned that a method in a class is less reusable than a regular function outside a class. As I used C++ more, I started to want a “move” constructor. I found that certain operations just didn’t make sense all the time. For some objects, I want inheritance, polymorphism, subtyping, “move” constructors, identity, mutability, and shallow equality; for others, I want copy constructors, operator overloading, immutability, typecase, and deep equality.
I also saw a connection between these two different styles of programming and the styles of programs on different operating systems. Windows and Mac applications are “object oriented” in the sense that the application controls access to your data. Typically only one application can access that data, and you can only manipulate the data by using the app. In the Unix world, data is typically visible (plain text with minimal structure) so that you can work with it in many different apps — text editors, grep, less, awk, perl, etc. This is like the data value approach. The web also follows the data value approach.
When I started playing with multithreading, regular objects needed locking and data did not. When I played with persistence and networking, some objects were serializable and some were not. When I started playing with distributed systems, some objects could be copied between threads, processes, and machines; others could not (though they might migrate). When I learned linear logic, the distinction continued: some objects were linear in nature and others were not.
In each endeavor, the distinction was clear: there were two different kinds of objects, and the two kinds stayed in sync. The same objects that were not copyable in distributed systems were the ones that were not serializable, needed locking, were linear in nature, had intensional identity, used polymorphism, and so on. There were only two kinds of objects that I could see, and those two kinds recurred across many different problem domains. A language that pretended everything was the same (“everything’s an object!”) didn’t match what I was seeing. So I started thinking about a language that treated “objects” and “data” differently. When I went to graduate school, I had my chance to work on such a language. I started with a non-object-oriented language and added a minimal object system that only handled the “object” features and none of the “data” features. The result kept both objects and data simple, but let them be combined in useful ways.
The biggest project I was working on at Stanford was Obstacl, an attempt to add object-oriented features to ML, while keeping in mind the needs of programmers, and at the same time keeping in mind the flavor of ML.
By studying design patterns and general program design issues, we found that we could express many common patterns with just two (fairly simple) language features. These language features turned out to be interesting on their own, and you can read about one of them (mixins) in a paper I’ve made available on the web (see below).
It turns out that ML (and other high level languages) have features that can be used as building blocks for objects. Encapsulation and dynamic lookup (polymorphism) are already present in ML, and inheritance and subtyping are not too difficult to add. We were able to add to the language object-related constructs that did not intrude on the functionality provided by existing features. As a result the added features (classes and objects) are simpler than those in many other languages. For example, our classes do not provide functionality associated with modules, because ML already has modules. Our objects do not have to provide new semantics for dynamic lookup because ML already supports it.
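The idea that a high-level language already contains the building blocks for objects is easy to demonstrate with the standard closure encoding. This Python sketch (not Obstacl's actual construct, just the well-known encoding) gets encapsulation from a closure and dynamic lookup from a dictionary acting as a method table:

```python
# Encapsulation via a closure: the counter state 'n' is reachable only
# through the returned "methods"; the dict provides dynamic lookup
def make_counter():
    n = 0
    def get():
        return n
    def incr():
        nonlocal n
        n += 1
    return {"get": get, "incr": incr}

c = make_counter()
c["incr"]()
c["incr"]()
assert c["get"]() == 2
```

Nothing object-oriented was added to the language here; the "object" is assembled from features the language already had, which is exactly the situation in ML before classes and inheritance are layered on top.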
Digging deep into issues of equality, identity, copying, and assignment, I found that all the languages I had used treated these incorrectly! Could I be wrong about everyone else being wrong? I worried about that quite a bit, but every time I tried to formalize the issues, I came to the same conclusion: that objects (bank accounts, sockets, television sets, etc.) and traditional values (numbers, sets, lists, maps) should be treated rather differently in programming languages. There is at least one consistent model that can incorporate both objects and values; I’ve constructed such a model for Obstacl. Besides objects and values, equality (deep/shallow; reflexive/symmetric/transitive), identity, and assignment, this model also touches on mutability, subtyping, efficiency of implementation, serialization, and distributed programming. The margin of this page is too small for me to explain it; if you’re interested, please email me. Or read Henry Baker’s paper to learn about another consistent model addressing similar issues.
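Python happens to expose the equality/identity distinction directly, which makes it a convenient place to see why values and objects want different notions. A small sketch (the "bank account" is just an invented dict):

```python
# Two structurally equal values: deep equality holds, identity differs
v1 = [1, 2, 3]
v2 = [1, 2, 3]
assert v1 == v2          # deep/structural equality
assert v1 is not v2      # distinct allocations

# A mutable "object": identity is what matters
acct = {"balance": 100}
alias_of_acct = acct           # a second name for the same object
lookalike = {"balance": 100}   # structurally equal, different identity
assert alias_of_acct is acct
assert lookalike == acct       # structural equality can mislead here:
lookalike["balance"] = 50      # mutating the lookalike leaves acct alone
assert acct["balance"] == 100
```

For values, `==` is the right question and `is` is an implementation detail; for stateful objects it is the reverse, since two accounts with equal balances are still different accounts.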
In addition to the practical aspects of using a language for programming, I worked with Viviana Bono and Vitaly Shmatikov on an extension of lambda calculus that could express the basic constructs in Obstacl.
I’ve also investigated the implementation issues involved, and designed run-time data structures for classes and objects that balance small size overheads, small time overheads, and modularity. Unlike C++, our classes can be linked together using inheritance at link time, so they can be put into dynamically linked modules and updated in ways that would break C++ class updates. (For example, we can add new private fields to a base class, and derived classes do not have to be recompiled.)
Many times when proposing a language feature, it is worth analyzing it formally. The formal analysis often brings up dark corners that may lead to problems. Alternatively, it may turn out clean and beautiful, evidence that the feature is nice in theory. For our ML objects, we have written some calculi to analyze the features we want to add to ML. During the analysis, we discovered that these features could be added to other languages as well: by boiling down the extensions to the absolute core, we saw that they were not dependent on ML, even though ML made them easier to implement. In particular, mixins may be a welcome alternative to multiple inheritance in many new object-oriented languages, not only in Obstacl.
The main result of our calculi papers is that when we consider classes to be basic features, rather than simulated using objects, and when we consider imperative object languages, rather than functional object languages, we get a much simpler system. The object calculus community probably is not interested in this work but we considered it important to preserve and analyze the style of programming used in real programs, instead of a different style of programming valued in the programming language community.
We also included forms of modularity that programmers generally want in larger systems. For example, a subclass does not have to be recompiled if only private fields in the superclass are changed. Clients do not have to be recompiled if only protected methods are changed or if new methods are added. Also, we include modular object creation (using constructors) instead of requiring the user to initialize all the fields after an object is created. Many of these goals are taken for granted in the programming community, but not always in the language community, so we included them in our calculus to ensure that the simplicity of our system was not the result of leaving out important modularity issues.
Mixins are a powerful program-structuring tool. Imagine a scenario in which you have a Stream interface, a File class implementing that interface, and an EncryptedFile subclassing from File. Now suppose you want to encrypt a network stream instead. What would you do? You could restructure your program so that File and Encryption are unrelated classes, but there’s an alternative: mixins. You can change the class from EncryptedFile extends File to [For Any stream class S]: Encrypted extends S. Now EncryptedFile is simply Encrypted @ File. (I would use something other than @ but for web pages, @ seemed simplest.) You can also make an encrypted network stream with Encrypted @ Socket. You can simulate many useful forms of multiple inheritance by simply applying more than one mixin: Encrypt @ Compress @ UUEncode @ PGPSign @ File is a class that encrypts, compresses, uuencodes, and then signs data that is being written into a file. (If you are a Unix hacker, you may find this similar to combining filters at the command line through pipes.) In addition, you can do some things that are hard to do with multiple inheritance, like apply a mixin more than once: Encrypt @ Encrypt @ File is a class of doubly-encrypted files! (Note however that doubly encrypted files typically aren’t any more secure; it’s just a nice example of what you can do with mixins.) With mixins it’s easy to control the order and number of times mixins are applied. The paper listed below explains the semantics of mixins and the issues with type checking them. We also have a nice way to compile them (just once, unlike C++ templates).
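The "mixin as a class-to-class transformation" idea can be sketched in Python, where a function can take a class and return a subclass of it. This is only an approximation of Obstacl's typed mixins (no static checking, and the "encryption" is a toy that just reverses the text; all names are invented):

```python
# A mixin: given any stream class S, produce "Encrypted extends S".
# Toy "encryption": reverse the data before passing it along.
def encrypted(S):
    class Encrypted(S):
        def write(self, data):
            return super().write(data[::-1])
    return Encrypted

class File:
    def write(self, data):
        return data  # base behavior: report what was written

# Encrypted @ File
EncryptedFile = encrypted(File)
assert EncryptedFile().write("abc") == "cba"

# Applying the mixin twice: Encrypt @ Encrypt @ File.
# Reversing twice restores the original, so the asserts line up.
DoublyEncrypted = encrypted(encrypted(File))
assert DoublyEncrypted().write("abc") == "abc"
```

Because `encrypted` works for any class with a `write` method, the same mixin would apply to a hypothetical `Socket` class unchanged, which is exactly the reuse that the fixed EncryptedFile-extends-File design forbids.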
For some time I worked on a web proxy that modified Java bytecode as it was sent from the web server to the web browser. The modified applet was restricted from certain activities, such as opening too many windows on the screen at once. Hostile applets like these do not violate the type system or the security checks made by the virtual machine; instead, they use annoying or misleading behavior to potentially obtain sensitive information or cause damage to the user’s machine.
Insik Shin added a user interface applet to control the behavior of the proxy. In addition, he used the proxy to modify Java bytecode so that it would bring up a window that let the user control (i.e., abort) annoying behavior. For example, the proxy modifies any applet that plays sounds so that along with each sound, there is a small pop-up window that lets the user stop or restart the sound.
The web proxy is part of the Pleiades Project.
As of January 1999, my Python proxy is no longer being used for research purposes. My proxy was being used as the prototype for a new proxy (written in Java), which will be used for research on mobile code. Jun Zhai wrote a proxy core, with support for multiple users and more advanced Java monitoring. Vijay Ganesh worked on a framework for Java filtering. I am still working on the Python version to improve it for personal use. Version 3 is far more modular, and somewhat more reliable. Version 3.5 supports more general web page transformation (for example, turning Slashdot from green on white to blue on gray, or rearranging table elements on Excite’s page), more interfaces (curses and Gtk), and DNS pre-fetching (scanning a page for hostnames and performing DNS lookups on them before the browser needs them). In February 2000, I started working on version 4, which supports HTTP/1.1 (persistent connections, chunking, compressed HTML). I hoped to add caching, pre-fetching, history (including searching pages you’ve visited recently), and more HTML modifications, such as highlighting words, rearranging tables, and installing style sheets to improve readability, but in today’s browsers these are better done through something like Greasemonkey.
I was also involved in a project to “compile” type rules for Java bytecode into a Java bytecode verifier. See Stephen Freund’s home page for some details about the type rules. I am not actively working on the project at this time.
- Parameterized Inheritance vs. Multiple Inheritance
- A Calculus of Classes and Objects (MFPS ’99)
- A Calculus of Classes and Mixins (ECOOP ’99) Final version Copyright © Springer-Verlag, available from their web site.