Computer Science – Brian Kernighan on successful language design

>>DFB: The theme of today’s talk is “How to succeed in language design without really trying”. So could you please welcome back Brian Kernighan. [applause]
>>BWK: Thanks Dave. Thank you very much for that
kind introduction. It’s a real pleasure to be back at Nottingham and to see the new buildings, as
opposed to the old ones, which of course I don’t remember after 30 years at all. And also just thanks
to the Computer Science department for the Honorary
Professor appointment which is definitely an unexpected pleasure — so, I really appreciate
that. OK so let’s see what we’re going to talk about.
We’re mostly going to talk about languages in some sense or another. To some extent
tools, as Dave hinted. There will actually be a taste of document preparation in it, but not the whole story.
I’ve had to do that because Dave is here, along with
several of his former students and others. So let’s start this way. So, this actually
comes from a real story at Princeton. A student — a computer science undergraduate — thank you
undergraduates for showing up, you can answer this question! So the
student, whose name I have now forgotten – this is only a few years ago — told me of
something that his room-mate was doing. The room-mate had been — was in some other
field — Geology, or something like that — and had been set this task: given lots and lots
and lots of data, find out all of the big-ish
volcanic eruptions, like this. And so the question is: “How do you proceed
with a task like this?” And of course what my
young friend said, deprecating his room-mate who was obviously a
lowly Geology type, was: “He was doing it by hand!”. And you think:
“Would you do this by hand?”. No way!
Nobody here would do it by hand? Right? So, everybody would do the same thing, I
predict. You would write a program. Then the
thing gets a little interesting. What language would you use to write it in?
So, a show of hands or shout out an answer — this is an audience survey — “What language would you
use for this task?”
>>Reply from audience: AWK
>>BWK: AWK! Good man! AWK. Anybody else?
[laughter] I paid these guys ahead of time.
[laughter]
>>Reply from audience: Shell
>>BWK: ‘bash’, OK. We’ve got a shell expert here.
>>SRB (from audience): Assembler!
>>BWK: OK, Steve’s a special case.
[laughter]
>>Reply from audience: Basic — it’s the only one I still know.
>>BWK: Yeah! Basic would actually be a good idea.
You get the idea — there’s lots of different possibilities.
So anyway, I was once quoted — I think even accurately — as saying if I were marooned
on a desert island, the language I would like to
have a compiler for with me would be C. And so I thought to myself, let me write it in C.
So I wrote it in C, like this. It’s sort of embarrassing, at this point, to
recollect just how long it took me to do this because, well, the first thing is
‘fgets’. I can never remember what the order of the arguments is. Is the file pointer first or
last — well, dammit, got that wrong first time and does it return EOF or NULL? And the answer is “yes”,
but not the one I picked first. And so I got that
wrong. I blew the ‘scanf’ because you have to say %lf and not
%d, or even %ld, and so I got that wrong. And then ‘strtok’,
which has got to be the world’s worstly — worstly?! — the wrong word. The world’s worst-
designed subroutine or function ever. Just couldn’t get it right. So, it probably took me 20 minutes, or half an hour
and, of course, the program isn’t very good because
it’s not very robust. If I give it very long input,
or malformed input, or something, we get the wrong answer. So generally this is not the right
choice of language for this sort of task.
So then I turned to my other favorite programming language — and I wrote it in AWK.
[laughter] And there it is. And it took, as you can guess, about two
seconds to write, worked perfectly the first time. It is robust. Nothing will go
wrong etc., etc. So this I think is a good
example of the kind of thing that you get with languages, or at
least things that you can think about. Notation makes a tremendous amount of difference.
If you have a good notation it makes it a lot easier to do a job. And a lot of language
work is thinking about what is an appropriate notation for the kinds of
things that you want to do in a particular programming language. Another
issue that is sort of related is programmer
efficiency. How long does it take a programmer to write a piece of code? Can
you write it in 2 seconds or 20 minutes? That’s a big difference and so
if nothing else matters but the amount of time that the programmer takes to write it,
then you want something that’s very easy. And
so again there’s a set of tradeoffs that go into picking the language
that you have in mind. What about efficiency? Well, for a program like this, which is
probably run once, efficiency just doesn’t matter. Right, no matter what on a modern machine
it’s gonna run in a tiny fraction of a second and I probably spent more time doing C compiles than I
did in running the AWK version, even if I ran it
more than once. So efficiency doesn’t matter. Programmer efficiency is much more important than
machine efficiency for this kind of thing. But on the other hand there are plenty
of programs for which machine efficiency
actually does matter and I can assure you from
personal first-hand experience that Google doesn’t run its search engine with AWK, right?
They use C, or C++, or something like that. So, lots of different things that go into
this question of what do you do — what is your language of choice. So there are lots
of programming languages — you have lots and
lots of them to choose from — and in fact there are so many. There’s an image —
I was trying to find my image here — here it is.
Everybody here is, I suspect, familiar with the
image from Genesis 11 — the image of the Tower of Babel, in which people are trying to build
the Tower to reach heaven. God gets kind of ticked off at this and says “I’ll spread
you out all over the world and we’ll give you lots and lots of different languages”. And this, of course,
was for natural languages but in some sense the same thing has happened with programming languages.
And so this Tower of Babel was actually a VERY popular
art form in the 1500s and 1600s. This is Pieter Bruegel the
Elder from the 1560s, or something like that. So this is a
famous one. There’s another one which I actually like, which is not so famous, and I
actually don’t know who the artist is. I’ve tried Google search on it and I can’t find whoever
that is. But that one is kind of nicer in some sense because the tower structure is more obvious and
there’s another reason why I like it and it’s this one. This is the cover of CACM
fifty-five years ago — this is January 1961. And 1961 you think, that is roughly
thirteen years after Maurice Wilkes ran the first stored-program computer — the EDSAC. Thirteen years later. Look at all
of those programming languages! And if it was a problem of that many programming languages
in 1961 what do you think the issue is today? We’ve got lots. Some of those are still alive — not
very many, but you’ll occasionally see SIMULA, down there,
LISP is still alive and well, I don’t know. But you
get the idea. FORTRAN? Oh yeah FORTRAN! FORTRAN is
still alive. I suspect BASIC is there too. Lots of these. And of course the image was used by Jean Sammet on the
cover of her book in 1969, which attempted to be an
encyclopedic survey of all the programming languages that
were in reasonable use at that time. Of course it would be inconceivable for her to do that same kind of book today.
Maybe we can do some classifications, so let me do a bit of, well either over-simplification, or revisionist history if you like. In the nineteen
forties — these decade boundaries are completely arbitrary in some sense. In the late forties we had
people writing absolute machine code — putting things into machines, in binary, using
either switches on a panel or perhaps, if they were lucky, paper tape, something like that.
That’s the way that the EDSAC was programmed
originally, if I understand it correctly. Very quickly people realized that you
could do better. That you could get the machine to do some of the work for you
by basically doing the clerical stuff: assemblers to convert op-codes into bit-patterns;
convert addresses into where they are really going to be
in the memory and so on. Call that the 1950s. In the 1960s — and this is really the late fifties — the
thing that — probably the biggest single advance
in computing, at least on the software side, was the invention of high-level
languages, where you raised the level of discourse above the details of a
particular machine architecture and brought it to something that was closer
to the way that a real person would think about solving that kind of problem. So FORTRAN for
engineering kinds of problems; COBOL for business
kinds of problems; BASIC for instruction, things like that. So you lifted
it up. And this had a couple of real advantages. One is
that it meant that lots more people could write code — that you didn’t have to be an expert any more — or,
at least, not an expert in the assembly language of a particular machine. You could write your
program once — in your own terms — and translate it into a particular machine. And then the flip
side of that, of course, is that if you wrote it once in a high-level language
like FORTRAN, then you could translate it into whatever machine you happen to
run on. So this was a tremendous leap forward.
Now, every one of those things — Algol, FORTRAN
and so on — was focused on a particular domain of applicability
to some extent. What happened in the 70s I think, is the development of languages that would
let you take on some of the hard-core computing
things that go into writing compilers, loaders, assemblers, editors and so on, and ultimately the
operating system itself. And that was the role of C,
primarily. C is certainly the survivor of that era, that you could write any of these really
core central components of computing, in a high-level language, so that was
very, very important. Now, what’s going on here? As time passes computers are getting more
powerful, with more memory, and that means the programs are getting bigger and it becomes harder and harder
to maintain intellectual control of the programs as they get bigger. And so we start to add
things to languages that help us maintain control of the structure of the code.
And that leads to object-oriented languages of
which C++, I suppose, is the dominant
example these days. In the 90s — I’m running out of things to describe these
with — sorry — but certainly strong typing in its various forms …
Finally somebody saw it! Thank you. [Audience member reacts to slide’s description of Java
as ‘strongly hyped’]
I thought it was worth a try.
[laughter] Languages like Java – and you notice that Java is … well
first, it is strongly typed. But the other thing is that it’s again taking advantage of the fact that hardware’s
gotten more and more powerful. You can afford to do something that isn’t compiled into
efficient machine code but in fact is an interpreter. Which throws away a factor of
something in speed, but is easier on programmers and makes portability very
much easier. OK — and I don’t know what happens
in the 2000s, and then the 2010s, well we’re only halfway through and it’s unclear
but there’s a lot of ongoing work in languages — mainstream programming
languages like this. Languages that will take on any task, reasonably well, over
the whole spectrum of things you might want to do.
Now, I don’t know what’s gonna happen with some of these new languages, I will call this
“totally up in the air”. There’s a parallel stream, which I think is
interesting, which is scripting languages.
And again, loosely by decades — not quite accurate. A
very, very early example of that is SNOBOL, a language that is, I think, totally dead at this point
although you never know. And then the people in the 70s discovering that you could program in
the command interpreter of the operating system — something that, sort of, in retrospect
is obvious but seemed like a neat and new idea at the time. And then we get languages like AWK
and other scripting languages, sort of in the 80s.
AWK dates from roughly 1977. And then we get these scripting languages and I
probably should put Ruby up here, or something
like that as well, and then JavaScript. JavaScript, in fact, started in 1995 but has been the dominant
scripting language, I would say — certainly on the Web — ever since. And I don’t know what’s gonna
happen again in the decade that we’re in the middle of. So we
have, so far: mainstream programming languages, scripting languages. What’s a scripting
language? I’ve always liked Larry Wall’s definition
of scripting languages but we can be a little more precise than that, perhaps. I think the
fundamental thing about scripting languages is that text is a fundamental data type in
the language; they manipulate text as well as numbers, and that’s different from, let’s say, C,
where text is kind of an afterthought almost in some ways; it’s not a natural act to
do text processing in C, although lots of
people do. Regular Expressions tend to be an integral part
of scripting languages; sometimes they are libraries, like they are in Python, or I guess
in PHP, and sometimes they’re built right into the language, in some sense, as
they definitely are in AWK and in Perl, and I think
Ruby and to some extent in JavaScript. Either way they are part of it in a way that
they really weren’t in mainstream programming
languages. All scripting languages have associative arrays, sometimes as the only data
structure for manipulating more than one thing
at a time. And all scripting languages tend to play fast and loose
with types. So the type theory that
you might have in something like SML or OCaml or Haskell, or whatever,
— type inference and so on — is minimal in
scripting languages, often nonexistent: whatever it is, it’s OK, we’ll
manipulate it. And then usually interpreters, taking
advantage of the fact that machines are much more powerful, so you
actually don’t have to worry too much about
how fast something runs, it’s good enough. So there’s a third stream that fits some of the folks here, which I’m leaving
blank. The reason I’m leaving it — there are two reasons I’m leaving it blank. One is that
I don’t know enough about it to say anything intelligent, so it’s probably
better not to reveal my ignorance and the other is that I think, realistically,
functional programming languages starting with LISP and going on to things like Haskell
and OCaml today, are relatively little used in what I will call “the real world”,
i.e., not in a University. If you picked a random hundred programmers on the streets of Nottingham, or whatever, I
guess that five of them would be functional programmers, at most. It would
be down in the noise in that sense.
So, the flip side of that is that everybody profits tremendously from the kinds of things that
have been learned by research in functional programming languages. All kinds of
interesting things that have been first explored in those languages have found
their way, over the years, into mainstream programming languages. I guess you could start with recursion
as an early example. Garbage collection;
I would guess type inference and functions as first-class
citizens. All of those things started in functional programming languages and then find their way
into mainstream practice because they’re such a good idea. And then, finally the
fourth stream and this comes to what Dave was talking
about a few moments ago, this idea of domain-specific languages: languages that focus not on
taking on all kinds of programming problems but rather on a fairly narrow,
specific domain, in some sense. The languages need not even be Turing complete;
in other words you might not be able to compute just anything with them. And there are lots and lots of
examples of those and we mentioned regular
expressions, of course. And the shell and AWK have that property. Lots and lots of
document-preparation languages over a whole
spectrum of things. They are languages in the sense that they have
formal, rigorous grammars and well-defined
semantics at least in principle. Languages like that. SQL in its pure
form is a declarative language, but obviously a
language focused on database retrieval. R, a language for statistics, which I’m guessing many
people have used here. And AMPL, a language for
mathematical optimization, which, viewed from a distance,
is sort of analogous to R — big complicated algebraic
structure of things applied to a narrow domain. So I think what I’m going to do is to talk about some
of these kinds of things, focusing briefly on AWK, and then on AMPL, and then
come back to some of the document preparation kinds of
things. That’s where we’re going. One of the interesting things about
domain-specific languages — and something I would actually like you to try and
take away today — is that they are smaller and simpler than big mainstream,
general-purpose languages and that means that it’s actually quite reasonable that
somebody in this room could say, you know, “If I designed a language to solve a
particular problem I could actually make my job easier”. And so you can think of
building a small language that focuses on some task and that’s pretty reasonable.
Thinking I’m gonna build the next version of C++ — not so reasonable. I’m not
knocking you — but that’s harder. Something small, something of the scope of some of
these things like regular expressions or
some document preparation tool, much easier and well within the grasp of anybody in
a relatively short period of time — a good way to attack problems. I’ve been
involved with the design and
implementation of special-purpose languages like this for a very long time
over a whole spectrum of activities. I’ve probably done eight or ten of them,
depending on how you count. A couple of them
actually survive quite well; they’re really used, people use them today. I noticed several
people — thank you — said AWK would be the way
they would approach a problem. So AWK is an example of a survivor. I have done some languages
which were successful in their time, but the time has passed. And then there were others that
sank without a trace, without leaving any mark on the
world at all. So I won’t spend too much time on those. OK. But let me talk about some of
these languages and see what’s going on. As I do, think about what it is that might make a
language succeed — something other people would want to use. You built it; other people want to
pick up on it. Or what might you do that
would mean that other people would never use it. So think about the success
and failure. Think about how well the notation of the
language matches the tasks that it’s meant to take on. That’s the sort of thing that I’m
going to talk about at this point. Now I keep harping on notation. I think it’s actually an important issue. Benjamin Whorf was an American linguist
who lived in, roughly, the 1900 to
1945 period, something like that. Benjamin Whorf studied mostly the
languages of Native Americans in the south-west, people like the Navajo, and he had
this wonderful remark about language shaping the way we think and determining
what we can think about. He was talking about natural language. I think
it applies even more strongly to the artificial languages that we create to tell
our computers what to do. And so it’s worth
remembering that. The other language-related quote that I’ve got here, is
one from Alan Perlis. Alan Perlis knew a thing
or two about language design. He was part of the original Algol group. He created one
of the very first and most influential programming
languages ever. And he made this observation: a programming language should change the way
you think about something, otherwise it’s not
worth worrying about. If you haven’t seen his epigrams on programming, Google it; go
look at it. There’s probably a couple of hundred
epigrams there. Most of them are quite perceptive and
many of them are actually kind of funny. And so it’s worth just looking at — a
great guy. So with all of that in mind, let me talk
then about something which everybody here
appears to be semi-familiar with. Actually, let me do another audience survey.
How many people here have used AWK?
Ha! I could skip the next 10 slides!
OK. Let me at least rip over them fairly quickly. So AWK derives from experience with the UNIX shell,
which was not actually very good for numbers. Somebody here suggested using
bash or a shell; the shell is not very good for numeric computation. And I was interested in
that sort of thing when Al Aho and Peter
Weinberger and I were thinking about this. Al was interested in regular
expressions. He had just done a program called Egrep, which I suspect many of you
at least have heard of, if not used regularly.
And so he was very interested in that. Peter was interested in database
kinds of things and so the combination of three people, interested in these sorts of
peripherally related issues, led to this language which was meant for doing
these very simple one-liner kinds of programs, things that you thought should be only
a line of code. How do you design a language so they are only one line of code? So, data selection. Finding all of the
volcanic eruptions whose magnitude is greater than 6
is an example of data selection. Here’s a bunch of
data, give me the ones that match a particular condition. Sometimes you want to transform them
in some weird way. And sometimes you want to do a report generation kind of thing. Summarization. So this is
what AWK was meant to do and the things in blue are,
in fact, AWK programs that will do that. So the language itself uses a notational paradigm
which I think is extremely useful, very valuable. And if you look for it you’ll see it in a lot of
different places. It’s the idea of a pattern-action language. What you want to do
is you have a bunch of patterns. And for each pattern there’s a corresponding action and you just look
at the data and when you see an instance of the pattern you do the corresponding
action. And so that works for all kinds of things very nicely. Grep, for example, the
pattern is just a single regular expression and the action is: “Print it if it matches”.
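To make that concrete — with made-up input, not anything from the talk — the grep pattern-action and its AWK equivalent look like this:

```shell
# grep: the pattern is one regular expression, the action is fixed —
# print each line that matches.
printf 'tambora\nkrakatoa\npinatubo\n' | grep 'krak'

# The same thing as an awk pattern with no explicit action; the
# default action is to print the matching line.
printf 'tambora\nkrakatoa\npinatubo\n' | awk '/krak/'
```

Both commands print `krakatoa`.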
Sed, a sequence of editing commands, those are
the patterns; the actions are whatever you do when you
see one of those patterns. You think about the YACC parser-generator. The patterns
are the grammar entities and the syntactic
rules of the grammar. Think about Lex, the lexical analyzer generator.
All kinds of things have that structure. So pattern-actions are a very effective notation.
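Back to the three kinds of task AWK was meant for — selection, transformation, summarization — each really is a one-liner. The data file here is invented (year, volcano, magnitude), since the slide itself isn’t reproduced in the transcript:

```shell
# Invented eruption data: year, volcano, magnitude.
cat > eruptions.txt <<'EOF'
1815 Tambora 7
1883 Krakatoa 6
1980 StHelens 5
1991 Pinatubo 6
EOF

# Selection: records whose third field (magnitude) is greater than 5.
awk '$3 > 5' eruptions.txt

# Transformation: reorder the fields, name first.
awk '{ print $2, $1 }' eruptions.txt

# Summarization: count the records and average the magnitude.
awk '{ sum += $3 } END { print NR, sum / NR }' eruptions.txt
```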
People resonate with it because that’s the way we think. You see pattern-action, I guess, in the
pattern-matching stuff in functional languages
too. I’ll just say that just to make it look
like I know something — even if it’s not true. So anyway AWK is basically a bunch of pattern-
actions, like that. The patterns are
things like regular expressions or numeric tests like ‘$3>6’ and the action is a
language that looks like C. Another principle of notation, perhaps, is: build
on what people are already familiar with.
Don’t invent something brand new if there’s something already there that’s
really pretty decent and that people
will know about already. And then the
structure says: “Automate what you can”, so that people don’t have to write
code that’s just boiler-plate when you know what you want to do anyway, which is to go
through all the data. So AWK is, in effect,
three nested loops: go over the files; in each file
go over the lines; in each line go over the patterns and, if
the pattern matches, do the action. And so
that’s the whole structure of the thing. We spent a lot of time when we were working on this trying to put features into the language
that made it easy to do these one-liner things, where something you know in your
heart it’s one line long. It really shouldn’t take more than that. What can you do in
the language design to actually make it so that it takes one line – and nothing more?
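As a concrete instance of that one-line goal (the input here is invented): the input loop and the line splitting come for free, so inspecting fields needs no setup code at all.

```shell
# awk's implicit outer loops read every line and split it into fields
# before any pattern is tested; NF is the field count, $2 the second field.
printf 'one two three\nfour five six\n' | awk '{ print NF, $2 }'
```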
So we did a lot of things automatically. Reading through the input and splitting it?
Completely automatic — you don’t have to think about it. It just does the right thing
and that goes right down to splitting the individual lines into fields, because
people wanted to pick up the first field, the second field and so on. Variables in the language.
You need variables obviously but what we did was to say: “Huh! What goes into a variable?” Could be
a string, could be a number, we don’t care. Either
was fine. We’ll just try to keep track of it, and try to do consistent arithmetic, or other
operations, depending on whether it seems to be a string or seems to be a number. Don’t
worry about initialization or types, etc. Build
in some operators, throw in some variables for things that you already know. Why should the user have to
compute them if you know them yourself? That’s very easy. I mentioned associative arrays — I’ll come back to
that in a second. We need some kind of data structuring mechanism, something to handle aggregates of data, and the associative array is good for that. I
mentioned Egrep, that’s what Al was working on.
And in fact the first version of AWK — the current version of
AWK, at least as I put it out — is still Al’s code from
1976 with the same bugs in it as there always were.
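The loose variable handling described above can be seen in a one-liner (invented input): the same field is treated as a number or a string depending on the operator, and an unset variable quietly behaves as zero.

```shell
# + forces numeric treatment, adjacency concatenates as strings, and
# the never-assigned variable 'missing' acts as 0 in arithmetic.
echo '3 4' | awk '{ print $1 + $2, $1 $2, missing + 1 }'
```

This prints `7 34 1`: the fields add as numbers, concatenate as strings, and the uninitialized variable needs no declaration.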
Control — I’ve mentioned that. And so on. And so it’s all pretty straightforward. All aimed at making the
language so that something that was simple, in your mind, should be simple when you wrote the code.
So that in principle you could just type an AWK program
at the command line and have it work. Let me say just a word about associative arrays.
Associative arrays are THE most useful single data structure. Period.
If you’re only going to teach one data
structure in an undergraduate course, teach them the associative array, right? It’s the
only one that matters. All the others are derivative, in some sense. So here’s an
example of associative arrays, again with the
standard undergraduate computation: “What’s your budget for beer and pizza?”
So I’ve a bunch of records that say how much pizza
I bought and how much it cost. And I want to know the total at the end of it. Add up all the corresponding pizza
values and all the corresponding beer values. And here’s
the AWK program which does it. This is actually an action with no pattern. And so one of those
“do it automatically” things is to say that when you have an action with no pattern, do it
on every input line. On every line it says:
“Increment the pizza element or the beer element by
whatever you see in the second field”. And then at the end go over them and just print the accumulated values.
And that’s so straightforward that I can actually type it correctly at the command line. Just do that
computation, which I do on a very familiar everyday basis.
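The slide with the program isn’t reproduced in the transcript, but a sketch of that beer-and-pizza computation, with invented amounts, might look like this:

```shell
# Invented ledger: item, amount spent.
cat > ledger.txt <<'EOF'
pizza 12.50
beer 7.00
pizza 8.00
beer 4.50
EOF

# Action with no pattern: runs on every line, accumulating into the
# associative array 'total', keyed by the item name in field one.
awk '{ total[$1] += $2 } END { for (k in total) print k, total[k] }' ledger.txt
```

Note that the order of `for (k in total)` is unspecified, so the two totals may print in either order.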
So there’s lots of lessons that can be learned and I think I probably mentioned these over
and over again. Lessons that you learn if you do a
language and other people use it. Or even if they don’t.
But primarily if other people use it you learn things and I’ve seen the same
kinds of lessons show up over and over again. So let me sort of summarize them in the
context of AWK but the same thing happens
in other places. One is if you do anything useful people will abuse it. Guaranteed! And so AWK has been used for all kinds of
screwball things. We meant it for one-line programs.
So the typical AWK program is one or two lines long. I’ve seen AWK programs as long as 20,000 lines.
That is serious abuse — by somebody who’s obviously deranged.
[Laughter] So you can write all kinds of things in
it because it’s a string processing language and it’s pretty good, and associative arrays let
you do symbol tables very easily. And so on. I see programs
that generate AWK and have written them myself. That seems
like a very useful thing to do. And I’ve even seen people
learn to program starting with AWK. Which, as Jon and I
were talking earlier — and Graham maybe — about: “What’s the first programming language
here?”. And I’m gonna guess it isn’t going to
be AWK either. And something that’s important. If you
have a program which is used and people start to write programs that generate that,
you discover very rapidly that
machine-generated input is different from input written by people.
First it’s voluminous because machines are tireless in cranking the stuff out, and secondly
it doesn’t know about all the places where something
might not quite work right, or whatever. And so machine-
generated output stresses programs very much harder than people do. And so it’s a good
way to test programs and find out when they don’t work. One of the very first — *the* first
implementation of the C++ compiler
that Bjarne Stroustrup wrote at Bell Labs was written in C++, of course, but it generated C.
And so the output of the C++ compiler was C
that was utterly, unbelievably, inscrutable
to humans. And it was a stress test for compilers because
it had things — expressions that were nested twenty, thirty, forty, levels of parentheses deep,
with all kinds of weird names that were very similar and it just tended to break C compilers.
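In that spirit, here is a trivial generator of machine-style input — the depth and shape are invented for illustration, not Stroustrup’s actual compiler output:

```shell
# Emit an arithmetic expression wrapped in 40 levels of parentheses —
# trivial for a program to produce, unreadable for a person.
awk 'BEGIN { e = "1"; for (i = 0; i < 40; i++) e = "(" e " + 1)"; print e }'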
Machine generated stuff is just different. If
you design a language and other people use it, you very quickly learn that you
screwed up. There’s no shortage of screw-ups
in AWK. A couple of them I can take credit for myself; the others
I’ll pass on to my colleagues. And another thing
is this tension between changing something because you got it wrong or because you’ve
have a better idea, versus trying to keep it stable so that people can learn something don’t
have to keep trying — to keep up with your language. I don’t know how many
of you are writing Swift code to make your iOS applications, for example. But Swift
keeps changing and people have code that works and oops! it doesn’t work any more with
the next version. That’s a problem and how do
you stop that? And at the same time if you’ve done something wrong, or you need new
features, people want to add things. So there’s
this pressure, this continuous pressure for adding more stuff, but at the same time keeping
things small. Hard to do. And I always like
this observation from Tony Hoare about not including untried ideas of your own in a language. But somebody’s gotta try them.
How does it get started?! Anyway, I have one example of a perverse AWK program. I had several but
I think I’ll not impose on you. I’ll just show you the one.
This is from 99-bottles-of-beer.net. It’s a site that has programs that print the lyrics of 99 bottles of
beer on the wall, 99 bottles of beer … if one of those bottles should happen to fall
etc. You all know this drinking song? It has that in fifteen-hundred-odd languages and
it’s interesting — there are more weird languages than you could imagine — back to the Tower of Babel — and this is one of
the AWK versions and I rather like it.
OK, I want to switch gears, then, to a language which I’m guessing will be rather less familiar to everybody here. This is this language called AMPL. So has anybody here used AMPL? Total deadly silence. OK, I should have started
here. Skipped all that AWK stuff. So — everybody here knows what linear programming is,
right? Basically you’ve got a system of constraints
and you want to find a solution; you want a set of variables whose values, plugged in, satisfy all the constraints, and at the
same time minimize or maximize some objective function. So that’s fundamentally
what linear programming is, if the constraints and the objective are linear. Then
you can have versions where they’re not linear, and you get other things like
integer programming, quadratic programming, and so on. How do you describe this system of equations
or inequalities and the constraints and so on?
How do you describe that?
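To make “constraints plus an objective” concrete before getting to the notation question, here is a toy sketch in Python (the problem data is invented for illustration): a two-variable LP solved by the textbook observation that an optimum of a bounded LP lies at a vertex of the feasible region, i.e. at an intersection of two constraint boundaries.

```python
from itertools import combinations

# Toy LP: maximize 3x + 2y subject to
#   x + y <= 4,  x <= 2,  x >= 0,  y >= 0
# Each constraint is stored as a*x + b*y <= c.
cons = [(1, 1, 4), (1, 0, 2), (-1, 0, 0), (0, -1, 0)]

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in cons)

best = None
# Check every intersection of two constraint boundaries (the candidate
# vertices) and keep the feasible one with the largest objective value.
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue  # parallel boundaries: no intersection point
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        val = 3 * x + 2 * y  # the objective function
        if best is None or val > best[0]:
            best = (val, x, y)

print(best)  # (10.0, 2.0, 2.0): the maximum, at the vertex x=2, y=2
```

Real solvers use the simplex method or interior-point methods rather than enumerating vertices, but the shape of the problem is exactly this.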
In olden times that used to be done either with a program that was called
a matrix generator, which would generate a giant matrix of the coefficients of the
constraints. Or there was a really klunky language that made Fortran look elegant,
called GAMS, which was modeled on Fortran
but it was awful. So Bob Fourer and Dave Gay
and I worked on this. Bob Fourer was a professor until fairly recently
in the operations research — what was it called? — the Industrial Engineering and Management
Science department at Northwestern University near Chicago.
He spent his sabbatical year with us at Bell Labs in 1984, that golden era.
He had this idea for a language,
and Dave Gay and I worked on the design of the
language with him, and I made the first implementation of AMPL, which was my first C++ program,
so as you can imagine, it was pretty crappy.
So what AMPL is, it’s a language that lets you describe optimization problems in a notation that’s relatively
convenient for somebody who understands basic mathematics, simple mathematical notation. You start with a description of what the problem is,
and you write it down algebraically, as I would write it on the board here or over there.
Then we would transliterate that into AMPL, and AMPL would take that
description of the problem, and the data for a specific instance of it, and turn that
into something that would go into an optimization program — a solver. The solver
would think for a while and then produce results back, and AMPL would reverse the process so
you could see the results in the form that you had originally posed the problem. That’s basically what it
does. Let me give you an example because that’s a lot of arm-waving. So here’s an example of a
classic optimization problem called the diet problem. The diet problem: you have to feed a large
number of people, and you want to make sure that you don’t give them too much of
fat, sugar and stuff like that, but you give them enough of nutrients they need, like various vitamins
and so on, and you want to do it at minimum cost. So think of it as how do you feed people
who live in dormitories or residences? The university’s desperately eager to pay
as little as they can to keep you guys alive
but they want to keep you alive, just barely.
OK, so that’s the idea. OK, so this is sort of an algebraic specification
of a diet problem. It says look, I’ve got foods and I’ve got nutrients,
and I’ve got an array which is how much of each nutrient is
there for any given kind of food, so it’s a two-dimensional array. Then I’ve got things like how much does a
particular kind of food cost, and what nutrients
does it contain, and things like that. Then what I want to do is figure out
how many packages do I buy.
Now this one is posed as I want to buy TV dinners, which were a popular form of … food — I guess, sort of food — back in the States.
I want to minimize the cost, which is how many
packages do I buy of each type and how much does that package cost. And I have some constraints, which are
basically I want to make sure I get enough nutrients
but not too much, for each of the various things. That’s fundamentally it.
That’s what I would write if I were a professor doing operations research and writing things on the board. And what AMPL does is to take that same thing and convert it into a language that you and I could, as computer people, say, “Oh, OK, I see what it is.”
I’ve got sets, I’ve got parameters, two-dimensional parameters, one-dimensional parameters.
I’ve got indexing, things like “for all the i in nutrients,
this has to be greater than that.” And then here’s what I want to buy,
I want to buy this which has to be at least this many but not more than that many. And I want to minimize the total cost, which is the sum of these things: the cost of a package times how many
you have to buy of the package. I have the constraints. It has to be at least this much
and no more than that much for each of the nutrients. You can see that the basic entities
are fundamentally indexing over sets, in a reasonably natural way.
It’s a purely declarative language at that point. It doesn’t say anything about the computational
process; it’s just taking data in and producing a big
matrix — very big but very sparse. A typical industrial linear programming
model might have, oh, 1,000 to 10,000 rows and 1,000 to 10,000 columns but
the density would be way under 1 percent. So very big but very sparse matrices. Then the solver thinks and back comes the value.
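A quick sketch of what “very big but very sparse” means in practice: store only the nonzero coefficients, say as a dictionary keyed by (row, column), rather than a dense array. The sizes below are invented, but in the range just mentioned.

```python
import random

random.seed(1)

# A hypothetical 10,000 x 10,000 constraint matrix with ~50,000 nonzeros.
n_rows = n_cols = 10_000
sparse = {}
for _ in range(50_000):
    sparse[(random.randrange(n_rows), random.randrange(n_cols))] = random.random()

density = len(sparse) / (n_rows * n_cols)
print(f"{len(sparse)} nonzeros, density {density:.4%}")
# A dense float64 matrix of this size would need ~800 MB;
# the dictionary holds only the entries that actually exist.
```

Solvers exploit exactly this structure, which is why the density matters so much.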
So that’s the fundamental AMPL language.
But then there’s something that’s kind of interesting about it
as well. I’ve got the data to worry about. Because that model would describe any number of
diet optimization problems, but specific ones,
what I would do to buy TV dinners is different from what I would do to feed
undergraduates, which is different than what I would do to run a high-end
restaurant. There’s all different kinds of things, different data. And so within AMPL
there is also a data specification language — various ways in which you can put in the data for this.
A lot of it just building tables. There’s lots of different ways to do it. So a modeling
language, a data description language and then of
course you have to actually run the silly thing. And so there’s yet another language, kind of
a command language, a shell level kind of subcommands, sort of like Git or something like that, where
there’s lots of subcommands that say “Here’s the model, here’s the processing I want you to run,
I want you to display the stuff in some format.” So you have all of these various things. The particular data like I gave you there,
which I’m not going to go into, says that the optimum cheapest diet
is nothing but macaroni and cheese. You probably knew that anyway.
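The actual AMPL model and data aren’t reproduced on this page, but the structure of the diet problem can be sketched by brute force with made-up numbers (all the food names, costs, and nutrient requirements below are invented, and it restricts itself to whole packages for simplicity):

```python
from itertools import product

# Hypothetical data: cost per package, and nutrients per package.
foods = ["mac_and_cheese", "frozen_veg", "protein_pack"]
cost = {"mac_and_cheese": 1.0, "frozen_veg": 2.0, "protein_pack": 3.0}
amt = {
    "mac_and_cheese": {"calories": 400, "vitamin": 1},
    "frozen_veg":     {"calories": 100, "vitamin": 5},
    "protein_pack":   {"calories": 300, "vitamin": 2},
}
# Required range (minimum, maximum) for each nutrient.
need = {"calories": (2000, 3000), "vitamin": (10, 40)}

best = None
# Exhaustive search over 0..10 packages of each food: find the cheapest
# combination that meets every nutrient requirement.
for counts in product(range(11), repeat=len(foods)):
    total = {n: sum(c * amt[f][n] for c, f in zip(counts, foods)) for n in need}
    if all(lo <= total[n] <= hi for n, (lo, hi) in need.items()):
        price = sum(c * cost[f] for c, f in zip(counts, foods))
        if best is None or price < best[0]:
            best = (price, dict(zip(foods, counts)))

print(best)  # cheapest feasible diet: mostly macaroni and cheese
```

With this toy data the optimum is five packages of macaroni and cheese plus one of vegetables. An exhaustive search like this only works at toy scale, which is exactly why real models go through AMPL to an industrial solver instead.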
You can see certain problems, like it says that you’re supposed to
buy forty six and two-thirds packages of macaroni and cheese. And when I last looked,
Sainsbury’s didn’t sell that, nor did Tesco. So in fact if you wanted
an integer solution, it’s pretty easy. You go back to here, and
all you would have to do is to say right here “integer” and it’s done. And if we do the translation into essentially
the same representation, it would go into a solver,
but the solver has to know how to do integer linear programming, which is a much
harder problem, because that’s NP-complete. So the model doesn’t change except for the
addition basically of one word. That’s the advantage
of having a modeling language like that. So AMPL, in spite of the fact that none of you
have ever heard of it, has been sort of successful.
I would call it a big frog in a very small pond, in some sense. It is
taught in university courses though apparently not here — a total failure of the British educational system.
[laughter] But it is taught in Princeton.
I didn’t know that before I got there. And it’s definitely used in industry. There’s lots and lots of companies that you have
heard of that use AMPL a lot. Major airlines use it for crew scheduling for
example; and major manufacturing companies use it for figuring out production line
stuff, and transportation companies in general use it for getting things from here to there. So it’s really used there and it supports this tiny, tiny company where I’m a
very inactive part of it. The language started out, as I mentioned, purely
declarative. You couldn’t say anything except “Here it is, go solve it” but it turned out that
wasn’t enough for the kinds of things that people wanted to do. So what happened
is that we had to add to it the trappings of
real programming languages, the things that make them Turing complete.
So it now has loops and conditionals and function call mechanisms and so on.
All of the stuff that you need.
The problem is that that stuff was added later, so the
syntax is really weird and irregular. And that I think is a general observation
about languages. You start with something that you think is relatively clean. It
has what would be called, I guess Fred Brooks called it “conceptual integrity”. And then you start
adding stuff to it and you lose that — whatever cleanliness there might have been originally. And certainly in AMPL I think that the
added stuff for loops and conditionals and so on
is really quite ugly. I didn’t add it, so I can’t take any blame for that. The other thing that’s kind of interesting
just off to the side, and we’ll come back to it, as a question of success or
failure. AMPL is proprietary. It is not open source. If you want it, you have to pay for it.
It’s what keeps this little tiny company going. We were talking earlier today about what’s
the role of Open Source and how do companies know what would have happened
if some of the things that people did in the early days were open source, or weren’t
open source, today. I’m guessing that AMPL and analogous things survive today
as proprietary or closed because it’s not worth anybody’s effort to just go
and build one for fun. Nobody here is going to put the effort into building a
replacement for this just as a labor of love because it’s too much work and so
perhaps we’re OK. But that’s a weak reed to stand on
and so it’s possible, at some point, somebody
will come along with something that is better
than AMPL or at least equivalent that doesn’t cost anything and is supported
and maintained. At that point the little tiny
company will go out of business. It will have
no effect on me so it’s OK.>>Question from audience:
“Couldn’t your company support itself
through consulting operations, which is what
a lot of Open Source companies do?”
>>BWK: I think that you could support yourself through
consulting operations. I think that would be a reasonable alternative. It’s tricky and
I’m certainly not a business person here, but consulting is kind of a retail operation,
whereas selling software like this is closer to being a wholesale operation.
And so it’s hard to scale up consulting kinds of things and maintain even just
enough people to do it, let alone to do it well. But that is the model for many
open-source kinds of companies. I guess Red Hat
would be one of the more visible examples. They provide add-ons that cost money. And I guess the sort of ‘freemium’ model that you see on
lots of Web sites has that property too. “We’ll give you the basic one; it’s fine for anything you want.
If you want something good, you have to pay us.”
And so you could imagine doing something like
that. So maybe the basic linear programming version of this thing is free; for you especially, free.
But if you want to do stochastic programming, no way; you’ve got to pay. That kind of thing. So it could
work. As I said, I’m very disconnected from the company although I trade mail with Bob and Dave fairly regularly. I don’t know what their plans are for
that kind of contingency. It would be interesting to know. OK. Thank you. Now, we get to the document preparation
part of the talk. Not very much, but I wanted to
talk about a couple of things in document preparation. I worked on these kinds of
things back in the seventies and the eighties to some extent, and it’s an interest Dave and I shared
for many, many years, a source of good fun certainly for me
and I hope for him as well. These are a couple of languages that I want to talk about
very briefly, which were I think quite good at their time, but in various ways their time has passed, their ecological niche has disappeared, and so they have
languished, is probably the best thing you could say about them. The obvious one is this language called EQN, which I did with Lorinda Cherry. The idea is that if I want to
typeset complicated material – mathematical material – how do I do that?
The standard document formatting tools of the time when we did this didn’t have
any way to do mathematics. So I had the idea that mathematics is spoken aloud: when mathematicians and others turn their back
to the audience and scribble things on the board like this, they say “x sub i equals pi”.
Everybody knows what that
means. You don’t have to see what they wrote, because the spoken language of mathematics,
at least for simple stuff, is completely clear. And so the idea was to translate
that into formatting commands that would draw the mathematical expressions
appropriately. This coincides almost exactly with the invention of the pipe
mechanism in Unix, at a time when machines were very, very small with limited capabilities — the
first Unix machine had 64 K bytes of memory.
K; not M; not G. 64 Kbytes. So that meant that any individual program was going to
be fairly small and you couldn’t combine a bunch of features into one program because
you didn’t have room to run it. And so we were forced, I guess, into this idea of using the pipeline,
where there was a program to process the mathematical part and it fed stuff into the program that handled ordinary text.
And because they were separate programs anyway, you could think of doing a different language
where the notation was appropriate, where you could have notation that was sort of the
way mathematics is spoken by real mathematicians, and others: engineers, scientists, whatever. We did that, and it worked out
actually extremely well. The mathematics
mode in TeX derives from that. Don Knuth looked at it, said “Ah, OK”, and then went off and put his
own spin on it to make something that’s very much richer, more robust, higher quality output,
but fundamentally that same idea, something
that you could actually write fairly easily. So sometimes resource constraints actually work
in your favor. A couple of simple examples of EQN.
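The slide examples aren’t reproduced here, but the flavor of the idea, spoken math in and markup out, can be sketched with a toy translator. This is a tiny invented subset, nothing like the real EQN grammar, and it emits TeX-style markup rather than Troff commands:

```python
# Toy sketch of the EQN idea: translate "spoken" math words into markup.
# (Invented mini-notation; the real EQN language is much richer.)
GREEK = {"pi": r"\pi", "alpha": r"\alpha", "beta": r"\beta"}

def eqn_to_tex(src: str) -> str:
    words = src.split()
    out = []
    i = 0
    while i < len(words):
        w = words[i]
        if w in ("sub", "sup") and i + 1 < len(words):
            # "x sub i" reads aloud as it typesets: x with a subscript i.
            op = "_" if w == "sub" else "^"
            out.append(f"{op}{{{words[i + 1]}}}")
            i += 2
        elif w in GREEK:
            out.append(GREEK[w])
            i += 1
        else:
            out.append(w)
            i += 1
    return " ".join(out)

print(eqn_to_tex("x sub i = pi"))  # x _{i} = \pi
```

The point of the design is visible even at this scale: the input is exactly what a typist would hear a mathematician say.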
The interesting thing here — I talked about notation. Notation is important.
This notation is easier than TeX because it doesn’t do as much. But this notation has the
property that you can teach it to people who have no mathematical background of
any sort. We routinely taught this to typists at Bell Labs, who didn’t know any
mathematics but they could learn this in an hour to produce the level of
complexity of mathematical documents that were standard at the Labs at the time. There’s no problem of how to teach them
that stuff and after a while they taught themselves. The notation is actually pretty easy and
straightforward. The implementation of EQN was based on YACC, the parser generator. I assume you’ve at least heard of YACC if not used it.
Another example of a specialized language, a language that takes a grammar
and converts it into a recognizer for things written in that language. EQN used that. I think EQN would not have come about if I hadn’t had YACC available because I couldn’t write parsers
to save my life. But I can get a program to write a parser for me;
that was really good. And so that actually worked out rather well. And it’s sort of weird now
but if you think — you put a paper in a
conference, the first thing you do is you write the paper, put it into PDF, and you
send it off to them, right? That’s the way it works.
That was a totally foreign concept forty-odd years ago. Gee! you did what Dave described — you got,
you know — there was a typesetter and somebody who did typesetting
professionally, and so on. This is the first, and I think probably the only,
paper that was ever published in CACM
where the typesetting was done by the authors, and not by some compositor,
because we wanted to show it off. And so we did that. And we made it look
pretty much the way ACM looked, modulo the fonts not being quite right. Of course today this is commonplace.
I don’t think journals work this way, but certainly conference proceedings absolutely do.
And so, a new idea at the time.
The other language that I want to talk about, also
I think kind of interesting in some ways but definitely moribund at this point, is this language
called Pic. Dave mentioned the Linotron 202. We had done a lot of
interesting work on typesetting using a previous typesetter, a mechanical
nightmare called the Graphic Systems Model CAT. And the stuff that we had
done, which included Troff, and EQN, and Mike Lesk’s TBL program for doing
tables, was sufficiently useful — I mean documents at Bell Labs
were being produced by that stuff all the time — that we were able to convince our management
to spend 50,000 U.S. dollars in 1977 I guess, to buy this Linotron 202 device, on the grounds that we would then be able to produce even richer documents, things that had pictures in them
as well as mathematics and tables. And so to do that, I created this language called Pic,
which is basically a language for specifying drawings, line drawings like,
think flowcharts or something like organization charts, in terms of textual description of the pictures.
So rather than drawing something on the screen with the mouse, describe it in words, and a program, Pic, would convert
that into Troff commands that would actually draw it. So if you learn a good trick — pipelines,
separate programs, separate language — apply it again. And it really worked, both times. I’m going to give you just an example or two
of Pic in action. Here’s a really trivial one. Input, process, output — kind of generic.
That’s what we do here — input, process, output… teaching, programming, whatever. And so the language was meant to make
those kinds of things really, really easy, like that, and without having
to do anything in the way of arithmetic, it just sort of positioned things
at the right size in the right positions. It had more bells and whistles. And so here’s an example.
This is actually from a couple of years ago, three or four years ago, when I actually cared
about this. It’s an arbitrary picture of voltage over time. OK? And you can see in there an instance of
something I talked about before. The first version of Pic didn’t have
any notion of loops or conditionals. But that’s got a loop in it; it’s got a ‘for’ loop. Again… needed it, to be able to do
certain kinds of things, gets added, with a horrible syntax. So I learned a lesson but not very well. And I seem to have blocked off
the rest of it, but you get the idea about that. So there I have it, and there’s
that nice neat curve, written in Pic. So I could have drawn that thing
just as well freehand, or at least an artist could have drawn it just
as well freehand with a drawing system, and it would look perfect. But the thing that I wanted this picture for was
in fact the next picture, which is this one, where I want to show
what it means to sample a waveform by saying, you know,
“Here’s these little vertical places where you
sample the value of the waveform”, and that lets you reconstruct the waveform.
And doing that from the original Pic picture is trivial. Right? I just have to add a conditional in there that says,
“Every fourth line, please print it.” And that would be very hard if I’d had an artist
very carefully create that first curve, because then I’d have to pay the artist to go in
and very carefully do the vertical lines. And here, with those two pictures,
if I change my mind about the frequency or the shape of the curve or anything else,
it’s a trivial job to modify it.
So textual descriptions have a lot of advantages for that. Now something else that I mentioned is this idea of … machine-generated input for programs. So Pic is a language which lets you draw pictures.
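As a sketch of the underlying idea, text in and coordinates out, here is a toy box-and-arrow layout in Python. This is an invented mini-notation, not real Pic, which has far more features and emits Troff drawing commands rather than tuples:

```python
# Toy sketch of Pic's core idea: describe a diagram as text and let the
# program compute sizes and positions. (Invented notation, not real Pic.)
BOX_W, BOX_H, GAP = 1.5, 0.75, 0.5  # default box size and spacing

def layout(names):
    """Place named boxes left to right, joined by arrows; return shapes."""
    elements, x = [], 0.0
    for i, name in enumerate(names):
        if i > 0:
            # Arrow from the previous box to this one, at mid-height.
            elements.append(("arrow", x, BOX_H / 2, x + GAP, BOX_H / 2))
            x += GAP
        # Box as ("box", label, x0, y0, x1, y1).
        elements.append(("box", name, x, 0.0, x + BOX_W, BOX_H))
        x += BOX_W
    return elements

for e in layout(["input", "process", "output"]):
    print(e)
```

The user never supplies a coordinate; change the list of names and everything repositions itself, which is exactly the property that made the textual description easy to modify.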
There are certain specialized kinds of pictures that perhaps you could generate
the Pic for. And my colleague Jon Bentley had this idea of
“Gee, you know, graphs are a very specialized
form of picture.” All right? So let’s design a language that takes
data of whatever form, and converts it into neatly drawn graphs. And how will we do that? We’ll convert that stuff into Pic,
which in turn goes into Troff. So if an idea is good, making it recursive
makes it better. That’s what we did with that. That is kind of a
mediocre graph, but you get the idea of how this worked: machine-generated input.
So if a language is useful, then you can generate it by program. So, as I say, these are two languages that —
if you count Grap, three — that were I think quite useful in their time,
but their time has passed.
And the reason that their time has passed is that the Troff time has sort of passed. Apologies to Dave, who still does even more on it
than anybody else, I guess. It’s sort of passed. It was actually
very very good for lots of things, but it’s been pretty much superseded
in most people’s eyes by TeX or LaTeX or something of that sort. And some of the good ideas are preserved,
and some of the good ideas are kind of
disappearing, unfortunately. But that’s a reason why languages fail sometimes,
is that their niche, their place in the ecology
where they fit, has gone away. So, why do languages succeed? Well, I’ve harped
on notation a fair amount. I think the big deal is the notation — it really has to solve a problem that
people care about in a way that makes it easy enough
to write the code. If it doesn’t do that, then it isn’t going
to fly at all, in some sense. I think it’s helpful to be what I would call
‘culturally compatible’. Languages based on C, where the surface syntax sort of
looks like C, have done pretty well over the years, if you think of C++, and Java,
and Javascript, and I suppose arguably Perl, and certainly AWK…
all of those things have taken that surface syntax so it’s familiar, mutated it a bit, but, you know,
fundamentally it’s the same thing. It’s not that you have to follow it.
Python is different, and Ruby’s kind of a weird mishmash, I guess, etc.
But go where you want. So, I think, all of those things help a language
to succeed. If it’s familiar enough, then you’re probably willing to try it and
see if you can get over some kind of hump. And the other thing is that it has to be
environmentally compatible as well. You have — you don’t want to have to buy into
too much of a new way of doing business. One of the reasons why
C++ succeeded, in spite of its complexities and other properties, was that it fit
absolutely naturally into the C ecosystem.
It ran on Unix, libraries were compatible, the source code looked
very similar at one end of the usage spectrum, so it was a conscious engineering choice
by Bjarne Stroustrup to make it as close to C as he could, so that it would propagate that way;
you didn’t have to buy into a bunch of new ways of doing things. And the comparison between that and, say, SML
(Standard ML) at the time, which was exactly contemporaneous, was that SML required very much more buy-in,
more, you know, stuff you had to import into
the environment. So leaving aside even the unfamiliarity of the language,
it was just too hard to use, in some sense. This question of open source and — versus proprietary…
I don’t know. You could argue that either way, but I think today languages which are not open source
are going to have real trouble, for the most part, just because when it’s open source,
people are willing to invest something in it, because they know that it will — they don’t have —
well, first, they don’t have to pay for it in the first place, and whatever they do will help
improve it overall. So I think for the most part, open source is an important aspect of this. And then these sort of very amorphous things.
What’s the competition? So the competition for C back in the early to mid 1970s
as a system programming language was… Pascal?! People seriously suggested Pascal as
a system programming language. Pascal had lots of good things about it, but…
system programming? It’s a joke. You couldn’t possibly do it in that.
So that was an example. Oh, and the other competition for C, at sort of
the other end of that little spectrum, was languages that were typeless,
like BCPL, which was really elegant, but typeless language: didn’t match the architecture
of the machines at the time. So, weak competition. Think about Javascript, versus Java, on browsers. Java got there first, but wasn’t
the right kind of competition. Javascript did better, because it was enormously easier
for people to send Javascript source code to a browser, rather than having to compile something
and then send the compiled result to the browser. So Java never took off, really, on the browser side
of things; Javascript always won there. It helps to have good luck. C had good luck in timing.
It came along with minicomputers. Then it got a new burst of life with workstations;
think of the Sun workstations. And then it got another burst with the PC.
And then it got another burst with embedded systems. So C has kind of been surfing the wave for a long time,
getting that fresh wind each time. Think of Objective C, if you can. Objective C… kind of a C++, not look-alike,
that’s the wrong characterization — but, you know, attempting to compete in the
same space, pretty much unused. And then it was picked up to run on the NeXT
Workstation, by Steve Jobs, and then carried into Apple, where it was used
for a very long time (and still is, obviously) to create software for MacOSX,
and, to some extent, iOS. What’s going to happen when Swift comes along?
I think Swift is probably… you can correct me, I think probably going to be the death of
Objective C over the next — pick a time period — 5 or 10 years.
I don’t know for sure, but I will guess that might be the case. But the success of Objective C, then, was luck,
you know, just being in the right place at the right time perhaps, and then I think its death will be its
niche probably goes away. And I don’t know how any of this stuff will work on new
languages, like Rust, or Go, or Dart, which I
mentioned earlier. And then why do languages fail? Well, I already
told you some of the answers. The domain disappears. That’s what happened to
a lot of the Troff-related tools. Just — it’s gone away. The engineering might be
bad: languages are too big, or too complicated. Algol-68 is a wonderful example
produced in the United Kingdom. PL/1 is a wonderful example produced
in the United States. Too big, too slow, too late, too complicated —
all kinds of things like that. What about Perl 6? Would we call that “too late”? We would probably call that too late.
I think Perl’s time has passed, because it’s stopped evolving in some sense, and
Perl 6 is never going to arrive, really — and so Perl, in some sense, has missed the boat. Whether permanently,
I don’t know; it’s still a very useful language, but it will be interesting to see what happens. How about C++? Too big? Awfully big?
Awfully complicated? Is it going to go away? Probably not, because of the embedded base. There’s an
awful lot of C++ code out there, and people are writing a fair amount of new stuff. But you wonder whether it’s
passed some threshold of complexity that’s beyond mortals,
at least ordinary people. And then there’s questions of the sort of philosophical
choices. Some languages take ideology a little too far.
A lot of people — me, in particular — do not think in the sort of mathematically rigorous
and abstract way that you need to be a functional programmer. So, who knows. Lots of different reasons why languages potentially fail. Now I think very few languages ever disappear totally.
So that’s why I said “fail to thrive”. They don’t disappear; it’s just that they don’t grow.
And their user population shrinks. Let me close with an observation from Alan Perlis,
which is absolutely true. I don’t care what your language is; you can write lousy
code – which is, I guess, another way of saying what he said. But what it suggests to me is that there is still
lots and lots and lots of room for going off and inventing, creating new languages
that in some sense try to combine the best stuff from what went before,
but have some new take on it that makes it interesting and better
in some sort of way. The “too mathematical” brush, you could, I suppose,
arguably apply to functional languages in some sense. So probably if you want to do this, the place to start
is with domain-specific languages, rather than big general-purpose ones.
Don’t build the next C++, but build the next — I don’t know, regular expressions — you know, something small. And you might discover that it’s an enormous amount of fun, and that you might have an effect on things,
and it’s certainly very satisfying. So, anyway, that’s all I wanted to say.
Thank you again very much for your hospitality and your attention. I really appreciate it.
[applause]

79 thoughts on “Computer Science – Brian Kernighan on successful language design

  1. Subtitles please?

  2. Comic Sans improves ALL presentations!

  3. Perl 5 currently has a yearly release cycle (with new features every release, while maintaining backward compatibility), and Perl 6 has a release planned for Christmas this year ( though frankly it is awesome already ) 「$ perl6 -e 'say ([+] 1..10**10_000).chars; say now – INIT now'」 「20000␤0.3792978␤」 That is it adds all of the numbers in the range from 1 to 10¹⁰⁰ in less than half of a second without truncation to a floating point number (ok it actually cheats, but this is the way you would write it if you weren't going to calculate it based on the end-points yourself). Plus Perl 6 has features that no other language has, like built-in grammar engine, and auto-threading, super-simple way of adding new operators just as you would add a new subroutine (almost all "normal" operators in the Rakudo implementation are actually created that way). So the crack at 57m32s about Perl missing the boat seems quite mis-informed.

  4. Sapir Whorf hypothesis is garbage in actual linguistics. I wonder if it really has place in programming languages.

  5. Thank you for making this talk available to the public.

  6. He just missed which languages OUGHT to go away ;-P of which C++, Perl, JavaScript and PHP should be on the list. As a recovering C++ developer of over 13 years I hope Rust kills it for the good of mankind.

  7. I don't know how I managed to miss this event!

  8. Paragon of font history using comic sans
    what a lhard

  9. He is such an inspiration. He is a senior citizen and still getting it in I.T.

  10. Editing the audio to filter background noise would be great, but hey, it's not that big a deal.

  11. Wonderful video, thanks you so much

  12. Pascal with a few (small) extensions/modifications worked fine as a systems programming language. So did Algol. Bliss was definitely used as a systems programming language. So was Mesa. So was PL/I (but that one was probably too complicated). Cobol with a few extensions has been used to implement a (fast!) Cobol compiler so it could probably also have been used as a systems programming language.

  13. I wrote this exact program in Java.

  14. Fasacinating. Thank you.

  15. Awesome, but that person rattling coins is working my last nerve :l

  16. I bought a wallet from the University of Nottingham and it fell apart within a year. This talk was so good that now I feel I owe you money.

  17. Try out my circuit simulator? Thanks! Type: 'Circuit Solver' by Phasor Systems on Google Play.

  18. 13:54 I've noticed something that could be a pattern. Many legendary scientists seem to be humble in general and are not afraid to admit that their knowledge is limited.

    On the other hand, there are many people that are very arrogant, in contrast to what they have achieved (i.e. not much).

    I couldn't expect less from such a legend…

    Thank you U.o.Nottingham for sharing this lecture.

    And thank you Prof. Kernighan, for C and Unix.

  19. Are Brian's slides available?

  20. Man…this guy is just incredible. He has almost a legendary status in the history of computer science yet he is so humble and so down to earth. His presentation skills are just awesome -I didn't loose focus even once during whole video, which happens every 30 seconds when the profs at my uni try to teach. He explained everything in very clear and concise way, no BS at all. No trying to sound smart, no cryptic explanations, no ego at all. I wish lecturers at my uni were a bit more like him…

  21. Perl6 FTW

  22. A legend. You gave us so much, sir. I learned C just because a lot of modern languages I use (Java, C#) use its syntax, and I thought learning C and C++ was a way to honour the creation of C, even though I don't use them at work; I did it just to honour the great work. Thank you, sir. My job and life today are the result of all that fine work you did with Ritchie 🙂

  23. He's the Pascal killer!

  24. like

  25. WOW!

  26. Amazing talk. Learned so much about the evolution of languages from the master himself. I'm now inspired to read about a bunch of languages that I've not explored 😀

  27. I could listen to Prof Kernighan speak all day long. What a clear thinker

  28. I vaguely recall watching this two years ago, now with some more general programming experience I find it much more informative, very good stuff.
    I even used AMPL about 6 months ago; super useful for supply chain optimization, but the documentation is… haphazard: plenty of it, but you must dig through hundreds of pages of tutorial-style docs spread across two dozen PDF files. It really has no solid "man page" style syntax summary, doubly so for the data syntax.

  29. I think Perl was simply displaced by Python and Bash (as most non-compact systems have moved to Bash from POSIX sh since Perl's peak growth). Python became preferred for the heavy scripting end because it is easier to read, and thus easier to maintain for subsequent users, even though Perl may have some advantage for the original authoring. Python (with all the extensions) also has more uses and users: simple popularity.

  30. Epic video!

  31. Why a language succeeds: All those reasons, plus the creator needs to have a beard.

  32. This is like GOD has spontaneously decided to emerge in flesh and blood and talk to us. 😛

  33. Another reason I would add that (partly at least) helps some languages thrive is marketing. Most of the best-known languages of the new millennium (e.g. C#, Go, Swift, Rust) were developed or backed by major companies (Microsoft, Google, Apple, and Mozilla respectively). The most successful language is not always the one with the best specs, but the one with the best hype.

  34. The pro langog Hagi in 1960 not 1970 vere naice thanks in korckt

  35. At 25:07 you have the most concise example of an associative array. Period. Thank you Dr. Kernighan for this precious gem.
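
    The example at that timestamp is not reproduced in the comment, but a minimal sketch of the classic AWK associative-array pattern (word-frequency counting; the sample input here is made up) looks like this:

```shell
# Count word frequencies with an AWK associative array.
# `count` springs into existence on first use; any string is a valid key.
printf 'the quick the lazy the\n' |
awk '{ for (i = 1; i <= NF; i++) count[$i]++ }
     END { for (w in count) print w, count[w] }' |
sort
```

    The `for (w in count)` loop visits keys in unspecified order, hence the trailing `sort`.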

  36. a hacker for life! #master

  37. I would usually dismiss this kind of lecture with "pff, it's just that guy's opinion", but given that the "guy" in question is the "K" in K&R… I think I'll listen closely.

  38. I would have shouted perl to the first question, because I'd be done writing it and running it before most would get their boilerplate done. Though it would take 5 minutes to write it pretty and make it obvious how it worked for any later maintainer. But for problems like the one-shot mentioned above, a really simple text split and numeric compare on a field – and you're done. Maybe 3 lines if you were being super explicit.
    I love C and use it daily. C++ – well, some of it is good, but trying to use ever more complex constructs and bizarre template syntax (the template engine itself is Turing-complete without any code at all, as Damian Conway demonstrated) is a dead end – adding complexity to cure complexity.

    But for a simple one-shot task? In most cases developer time costs more than computer time, and perl is better for those. Parsing that out in C vs using a language meant to do parsing is not a win.

    Ah, so (on viewing further) our hero says AWK – and that's a great choice, if you know AWK. I know perl, so, horses for courses. And make no mistake, this guy is a hero.
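
    The one-shot filter described above (split on whitespace, numeric compare on a field) really is a single AWK pattern. A minimal sketch, with an invented field layout and threshold standing in for the real eruption data:

```shell
# Print lines whose third field is numerically >= 6,
# e.g. keeping only the "big-ish" eruptions.
printf 'Etna 1669 5\nTambora 1815 7\nKrakatoa 1883 6\n' |
awk '$3 >= 6'
```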

  39. Is it just me, or does anyone else feel his voice sounds much younger than his face?

  40. Unix, C and AWK are parts of my life. I love you!

  41. is AMPL a more heavyweight MiniZinc?

  42. Very good talk. Could anyone please help me with the presentation document shared in the talk? The Dropbox download link is broken and not working. Thanks in advance.

  43. To be a programmer, do you need to know more than one language?

  44. Very nice talk. Arguably, everything a programmer does is designing languages and interpreters for them. We write functions (interpreters) that take data[structures] (source code) and do something with it. Then you name all that stuff and there you have a (programming/declarative/configuration) language. Often, it is not worthwhile or helpful to define your own parser taking textual source code to obtain your syntax tree, since there are now many standard ways to represent the kind of labeled graphs that all programs are, textually or as nested data structures in any language (HTML with extra attributes, nested associative arrays [JSON/YAML], S-expressions…). In fact, please don't define your own parser for key-value data like so many Unix programs did in the past, just because it seems so simple.
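
    To illustrate the point about reusing standard representations rather than hand-rolling a parser, a minimal Python sketch; the configuration keys here are hypothetical:

```python
import json

# Data that might otherwise tempt someone into writing
# a homemade "key = value" parser.
text = '{"host": "example.org", "port": 8080, "debug": false}'

# One standard-library call yields a nested data structure:
# a dict with string keys, and ints and booleans already typed.
config = json.loads(text)
print(config["host"], config["port"])  # → example.org 8080
```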

  45. Five fifth decious include posion design.

  46. He kept playing with the coins in his pocket. lol

  47. OCaml is multi-paradigm, not only functional

  48. "strongly-hyped" XD

  49. Brian, I really like all the videos you are in here on Y-Tube. Have you ever tried C#? I was avoiding it for several years. Looked like another toy language. Really not that bad and simple to use.

  50. PopQuiz: What is the dollar amount in precious change that Brian is playing with in his pocketses?

  51. basic and vbasic would be exceptionally quick

  52. After C, Golang is the best thing that happened to me. Thank you, sir.

  53. I write a lot of python scripts in Linux, and 9/10 if I need to do some kind of data handling for files, I just have the python script run an awk command. Python is no slouch at file processing, but an awk one liner can usually do the trick 🙂
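
    A minimal sketch of the pattern this comment describes, a Python script shelling out to an AWK one-liner, assuming `awk` is on the PATH (the data and the AWK program are invented):

```python
import subprocess

data = "alpha 1\nbeta 2\ngamma 3\n"

# Hand the data to an AWK one-liner on stdin and capture its stdout;
# the program sums the second field of every line.
result = subprocess.run(
    ["awk", "{ total += $2 } END { print total }"],
    input=data, capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # → 6
```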

  54. Interesting he didn't mention Smalltalk on that list.

  55. I miss lectures.

  56. someone is hammering on the table the mic is on

  57. https://youtu.be/MhlILyfBNb4

  58. Good call on the swift thing

  59. what a legend

  60. Sapir-Whorf is Mostly Bunk…

  61. Camera has adhd

  62. Did he get his train…? I really hate speakers who talk like a machine gun; they can't teach anyone anything.

  63. I would say that today a language is more defined by the existing tools you can use with it than by the language itself.

  64. 18:30 Whorf had it so totally wrong. Human language is Turing complete. There is high formal language—what we slave over in school for 18 years—with a formal, propositional structure lurking somewhere important deep in the engine compartment under all the pollution-control plumbing—and here the expressive power is akin to mathematics. On the street, yo, language devolves into thousands of, like, pastiche DSLs.

    What Whorf should have said is that language facilitates culture, and culture facilitates language. So a language at any given point in time does have cognitive equity, and it is an instrumental prism of fad and fashion. But Whorf's creaking notion of IQ lacunae is just not on, and hasn't had any serious academic support for over thirty years now.

    C++ is a notation for abstracting a slow, painstaking evolution in the explicit mapping of computation onto time (instruction streams) and space (memory). C is often used instead, where the painstaking evolution is so painstaking that it becomes an institutional touchstone in its own right (e.g. the distributed Linux kernel collaboration). And now we have Rust crowding into the space above C++, where it's useful to exchange half of the explicitness in order to evolve the target system at twice the pace (Mozilla's Firefox project). Bug production rates in software are a fairly direct function of how explicitly one wishes to talk about the time domain.

    Abstract functional programming languages struggle to talk about the time domain at all, and have extremely low bug production rates as a direct result. Meanwhile, someone else's bug-riddled solution can handle problem sets twice the size, in half the time (this only suffices because bug hunting obeys the Pareto principle, in that 20% of your effort eliminates 80% of your most egregious leaks—a truly tragic reality if you can't sneak the odd reboot cycle into your deployed application to garbage collect your collective incompetence, but tolerable otherwise).

    Most of the people who hate C++ actually hate the problem it exists to solve. If you're truly wedded to the kind of arduous application domain to which C++ is best suited, its arcane syntax is mostly just a paper cut by comparison to all the crap you have to contend with—crap that stems directly from the underlying system's brittle and specific evolution. Of course, mud wrestling alligators is the poster-boy antithesis of the ivory-tower supersuit league; and really—in a right thinking society—the niche ought to be banned altogether, along with other forms of concussive self-damage, like the NFL. I get it, it's a throwback. A throwback with legs. Legs the size of giant hams. Hams larded with cholesterol.

    What I'm trying to say here is that the word "notation" gets us into trouble from the outset. Notation biases us into thinking about language as conceptual and formal-descriptive. C++ encodes an edit space of finicky future specificities concerning the interplay of computational time/space (3D game engines from not that far back are a great example of this—one production gets a better bang for their critical-section buck having a list of this and a tree of that; the next production gets a better bang for their buck from the reverse).

    This is painful. So you ask, why can't The One True Data Structure be all things, to all people, all of the time? And it can, so long as something else is doing the heavy lifting down in the basement where time and space are no longer treated as the abstract unification of water and sewage (ugly plumbing reality: the two shall meet, but never the twain shall burp; thank the gords for the grizzled neckbeard who left no stone unturned in devising the load/store fence of least action). This is why Python has made deep inroads into the machine learning space: there's professional plumbing underneath (and all around) that blazing matrix multiply.

    This is a fine talk, but I strongly suspect Kernighan is hauling Whorf's tired ass into focus at this juncture to enshrine logic, and eschew issues ecological.

  65. Thankfully the whiteboard notes will only ever be read by machines, so the fact that they're effectively flashcards shouldn't matter too much.

  66. Well, Lisp is a multi-paradigm programming language that started the idea of functional programming. Its first version was supposed to be a system for the symbolic manipulation of mathematics, hence it has symbolic expressions; then after building it, they realized they had made a general-purpose language, so they repurposed the "symbolic manipulation of math" into "symbolic manipulation of code", and this is how Lisp macros began. So if you are going to choose a "first paradigm" for Lisp, it is Symbolic Programming.

    SML is also imperative, and OCaml is Functional-Imperative-OO.

    My take on the history of languages and why there are multiple streams of development: it all began with development in three places, the Military, Academia, and Business, but the individual institutions in each of them were silos, isolated, and they hired very clever people, so you got APL, SNOBOL, COBOL, etc. ALGOL was meant to be some sort of committee standard, and then CPL, etc. But there were rebels in the bunch, and it was you guys, the Bell Labs team, who decided to strip a language down to its bare minimum, use it to make a useful OS, and share it; this was the boom. The UNIX/C combo meant that colleges around the world had a real OS and not just textbooks, because every piece of software out of those silos was either super expensive (Smalltalk, Lisp, APL, etc.) or lacking in marketing.

    And thanks to Berkeley, they knew AT&T was going to catch on to the success and claim back "its" IP, and we got BSD.

    So it was a combination of proto-free-software optimized for cheap computers that sparked the revolution and kids' interest in it, more than hyperexpensive systems carefully designed for kids, like Smalltalk. Today we have open-source versions of these old languages and they are awesome: Pharo is great, Common Lisp implementations are incredibly fast, some are almost as fast as C++ (while keeping the language dynamic), and if you remove the dynamic elements they can be as fast as C++, but they missed the ride that UNIX created with its specific, modest features.

  67. This guy knows stuff… and shows how to age gracefully.

  68. It's a pity that he quoted Whorf.

  69. 'Strongly Hyped – Java' YES!!! YES YES YES!!! 😀 😀

  70. I was one of the people who had to do systems programming in Pascal. Back in the 1980s I worked for NCR doing application programming on automated cash machines (ATMs). Even though it was application programming (business logic), it had many systems aspects, and ATM programming was not trivial. Intel did have a suitable language for their real-time iRMX OS (called PL/M, I think) that was used to write device drivers and managers. But it was considered too hard for application developers, and so Intel's Pascal/86 was mandated (I think this was a re-branded version of Pascal/MT).

    This was an extended version of Pascal designed to be used as a "real" language. It had support for include files, separate source code modules, overlays, variable-length arrays (only when passing parameters between different modules), and assembly language — but I never used that last one.

    It worked surprisingly well, but I would not want to do that again…

  71. At 28:50 on SWIFT, precisely right. Same with JS frameworks.

  72. This is one of the best presentations I've seen in a while: a simple explanation of a complicated topic. I think I'm going to learn a new programming language… Thanks!

  73. The man who introduced him taught me language design.

  74. Parallel to this is how the machines have evolved.

    Interesting how humans have become more and more distanced from what actually happens in the machine.
    Skynet soon !

  75. Simon Peyton Jones uses Comic Sans…
    Brian Kernighan uses Comic Sans…
    If you want to design a language, use Comic Sans.

  76. CS == Computer Science? No
    CS == C#? No.
    CS == Comic Sans? Yes!

  77. I used bash, sed, and awk to handle my references and generate BibTeX for my LaTeX documentation when doing my thesis.

  78. I have immense respect and admiration for Brian Kernighan and I love his work, but….

    Comic Sans? Really?
