A paradigm is a pattern or stylistic approach to doing or stating something. For example, when you prepare your résumé, you use a standard pattern, a paradigm, for formatting it. Another paradigm is the layout of the engine in a car --- although there are many different models of car, virtually all of them use the same design for their internal-combustion engine, frame, wheels, etc. Paradigms are useful to architects, who design and build skyscrapers with one paradigm and wood-frame family residences with another. Writers use paradigms for short stories, novels, and newspaper stories. Paradigms are used by scientists and engineers, too.
Paradigms are useful in computer hardware design --- there are standard chip layouts, network layouts, etc. And there are paradigms for computer software:
In CIS501, you drew blueprints --- class diagrams and object diagrams --- that showed the architecture of systems you built: card games, graphical toys, database systems.
There are several standard architectures for software systems:
This architecture also appears in single-user systems, where there is a clear division ("tiers") between data structures, controllers, and user interfaces.
This same idea is used for language processing, where custom languages are implemented as interpreters written in existing languages. (Example: an implementation of Ruby as a C-coded interpreter or an implementation of a gaming language on top of Ruby.) Any application software that requires an input language of words (and not just point-and-click) uses a virtual machine to read the language. You will learn to build virtual machines in this course.
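To make the idea concrete, here is a minimal sketch, in Python, of such a virtual machine: a few-line interpreter for an invented toy command language (the language and every name in it are made up for illustration).

    # A tiny "virtual machine" for an invented one-command-per-line language.
    # Commands:  "add N" adds N to an accumulator;  "print" prints it.
    def interpret(program):
        accumulator = 0
        for line in program.splitlines():
            words = line.split()
            if not words:
                continue                       # skip blank lines
            if words[0] == "add":
                accumulator += int(words[1])
            elif words[0] == "print":
                print(accumulator)
            else:
                raise ValueError("unknown command: " + words[0])

    interpret("add 2\nadd 3\nprint")           # prints 5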
Hardware has paradigms; software has paradigms. Software is written in programming languages, and programming languages use paradigms. This course covers three standard paradigms:
Do we do everyday tasks this way? Really, no. This programming paradigm comes from 1950s computer hardware --- it's all about reading and resetting hardware registers. It's no accident that the most popular imperative language, C, is a language for systems software.
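As a small illustration (a sketch in Python, not tied to any particular machine), notice how each statement below reads and destructively resets a memory cell, in a fixed order, just as the paradigm's hardware ancestry suggests:

    # Imperative style: a fixed sequence of commands that read and
    # destructively reset memory cells ("registers").
    total = 0
    count = 0
    for n in [3, 1, 4, 1, 5]:
        total = total + n      # the old value of total is destroyed
        count = count + 1
    print(total / count)       # prints 2.8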
Now, object languages like Java and C# rely on variables and assignments, but the languages try to "divide up" computer memory so that variables are "owned" by objects, and each object is a kind of "memory region" with its own "assignments", called methods. This is a half-step in the direction of making us forget about 1950s computers.
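Here is a sketch in Python of the same averaging computation, restructured so that the variables are owned by an object and updated only through its methods (the class and its names are invented for illustration):

    # Object style: the cells "total" and "count" are owned by the
    # object; only its methods may assign to them.
    class Averager:
        def __init__(self):
            self.total = 0
            self.count = 0
        def add(self, n):
            self.total = self.total + n
            self.count = self.count + 1
        def average(self):
            return self.total / self.count

    a = Averager()
    for n in [3, 1, 4, 1, 5]:
        a.add(n)
    print(a.average())         # prints 2.8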
Declarative programming dispenses with memory and assignments. All data is message-like or parameter-like, copied and passed from one component to the next. Since there are no assignments, command sequencing ("do this line first, do this line second, etc.") becomes unimportant and can even be discarded. The result is a kind of "programming algebra":
Think back to algebra class, where you wrote a set of simultaneous equations to define an answer to a problem --- the equations were definitions and their ordering didn't matter. Algebra is a declarative programming language. If you work with physicists, mathematicians, and chemists, you find that they think in terms of equations, and they define solutions to problems in equations.
The declarative paradigm works this way: you program an answer to a problem as a kind of equation set that uses complex input parameters ("messages") to calculate outputs.
Since there is no memory, all data values (parameters) are primitives (ints, strings) or data structures (sequences, tables, trees). Components pass these complex parameter data structures to each other. There are no race conditions because there are no global variables --- all information is a parameter or a returned answer. This paradigm applies both to a program ("function") that lives on one machine and also to a distributed system of programs --- parameters replace memory.
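Here is the same averaging computation, written as a declarative sketch in Python: there are no assignments, every value is a parameter or a returned answer, and the equation-like definitions could be listed in any order.

    # Declarative style: a set of equation-like definitions.
    # No cell is ever updated; data is copied and passed as parameters.
    def average(numbers):
        return total(numbers) / count(numbers)

    def total(numbers):
        return 0 if numbers == () else numbers[0] + total(numbers[1:])

    def count(numbers):
        return 0 if numbers == () else 1 + count(numbers[1:])

    print(average((3, 1, 4, 1, 5)))   # prints 2.8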
Consider a crossword puzzle: the structure of the puzzle and its clues are literally a program, because they are constraints for which the computer searches for a solution. In the logical/constraint programming paradigm, a program is a set of constraints, written as logical assertions, and a computation finds values for the variables mentioned in the logical assertions so that all the assertions are made true.
The cells of the puzzle are called logical variables. The logical assertions are the clues to the puzzle. The computation tries binding various combinations of values to the logical variables, trying to make the logical assertions true.
For example, 2 + 1 = X is a program (constraint) whose output is X = 3, and X = X * X is a program whose output can be either X = 0 or X = 1.
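A brute-force sketch in Python of how such a computation might proceed: the "program" is the list of assertions, and the machine searches a domain of candidate values for bindings that make every assertion true. (The search strategy here is invented for illustration; real constraint solvers are far cleverer.)

    # Constraint style (sketch): the program is a set of assertions;
    # the computation searches for values of X that satisfy them all.
    constraints = [lambda x: x == x * x]        # the program:  X = X * X

    solutions = [x for x in range(-10, 11)      # try a small domain
                 if all(c(x) for c in constraints)]
    print(solutions)                            # prints [0, 1]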
The logical-programming paradigm is useful for solving problems ("queries") in databases, knowledge discovery, and learning: a query to the database is a logical assertion with variables that must receive values to answer the query. (Think about the SQL programs you have written.) The paradigm was invented by computer scientists who studied resolution-based theorem proving; they noticed that a side effect of constructing proofs of logical assertions containing existential quantifiers, ∃X P(X), was that the proofs computed answers a for X that made P(a) hold true. From here, it was a small step to adapting logic as a language for specifying the computation of answers for such X.
This paradigm uses a shared, global memory, but once a correct answer is inserted into memory, it cannot be changed by destructive assignment. This eliminates race conditions and allows massive parallelism. (Think about how two people can work at the same time on the same crossword puzzle.)
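Here is a small sketch in Python of such a write-once ("single-assignment") memory, with crossword-style cell names invented for illustration:

    # A write-once shared memory: once a cell is filled in, it can
    # never be destructively reassigned, so workers cannot race.
    class WriteOnceStore:
        def __init__(self):
            self._cells = {}
        def set(self, name, value):
            if name in self._cells:
                raise RuntimeError("cell already filled: " + name)
            self._cells[name] = value
        def get(self, name):
            return self._cells[name]

    store = WriteOnceStore()
    store.set("1-across", "PARADIGM")
    store.set("2-down", "ALGEBRA")
    # store.set("1-across", "PATTERN")   # would raise an error
    print(store.get("1-across"))         # prints PARADIGM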
To understand syntax, we will learn a notation for stating syntactic structure: grammar notation. This comes in the next chapter.
To understand semantics, we will learn about semantic domains of expressible, denotable, and storable values. We will also learn extension principles that enrich the semantic domains. Finally, we will see how languages like Java, ML, and Prolog grow from "core" grammars and domains by means of the extension principles.
To understand pragmatics, we will learn the standard virtual machines for computing programs in a language. The virtual machines might use variable cells, or objects, or algebra equations, or even logic laws. In any case, the machines compute execution traces of programs and show how the language is useful.
You will reach the point where it will be useful to share your knowledge, your libraries, your styles with others. At this point, you might wish to develop a language that is specialized to the domain. Such a language is called a domain-specific language. You use the language to talk about problems and their solutions. If you can state solutions (algorithms) in your domain-specific language, and if a computer can understand your domain-specific language (that is, you write an interpreter --- virtual machine --- for your domain-specific language), then it is a domain-specific programming language.
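For instance, here is an invented toy domain-specific language for steering a robot on a grid, together with the few-line Python interpreter that turns it into a domain-specific programming language (all names are made up for illustration):

    # A toy DSL: a script is a sequence of direction words.
    # The interpreter below is the DSL's "virtual machine".
    def run(script):
        x, y = 0, 0
        moves = {"up": (0, 1), "down": (0, -1),
                 "left": (-1, 0), "right": (1, 0)}
        for word in script.split():
            dx, dy = moves[word]
            x, y = x + dx, y + dy
        return (x, y)

    print(run("up up right down"))       # prints (1, 1)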
Most of us will never design a language like C or Java, but many of us have designed or will design domain-specific languages to solve a narrow class of problems in a specialty domain. If you want your language to be good quality, you must learn the concepts in this course about syntax, semantics, and pragmatics --- you will rely on all three to design a useful domain-specific language.
You have probably hacked code in HTML --- it's a domain-specific language for web-page layout. You might have used a gaming package to create an interactive game; you have thus used a domain-specific language for gaming. Excel is a clever, text-plus-graphics domain-specific language for spreadsheet construction. make is a hugely useful little language for compiling and linking C files into one huge C program. And so it goes. Indeed, any nontrivial, grammatical input format for an application is a domain-specific language, with its own syntax, semantics, and pragmatics --- language design and systems design go hand in hand.