150 Matching Annotations
  1. Dec 2016
    1. Input commands may appear in guards. A guarded command with an input guard is selected for execution only if and when the source named in the input command is ready to execute the corresponding output command.

      Input guards ensure that both the receiver and the sender are ready.

    1. for each node labeled by m, create one child for each marking of Post(m)

      What is the depth of this tree? Each marking m' in Post(m) will have children of its own. When do we stop?

    2. Ingredients

      Places, transitions and tokens make up a Petri net. A Petri net is not a programming language; it is a formalism that allows modelling concurrent programs visually.

    1. Many newcomers to Haskell are puzzled by the concept of monads

      What is it?

    2. monad

      No idea what a monad is.

    3. Strictness flags may be used to address memory leaks: structures retained by the garbage collector but no longer necessary for

      I thought Haskell didn't have garbage collection. - I guess it does.

    4. Data structures in Haskell are generally lazy: the components are not evaluated until

      Non-strict semantics.

    5. two other useful polymorphic functions on lists that will be used later. Function head returns the first element of a list, function tail returns all but the

      These also apply for all element types a: universal polymorphism.
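      A quick sketch (my own example, not from the text) of how these universally polymorphic list functions apply at any element type:

      ```haskell
      -- head :: [a] -> a and tail :: [a] -> [a] work for every element type a
      firstAndRest :: [a] -> (a, [a])
      firstAndRest xs = (head xs, tail xs)

      demoInts :: (Int, [Int])
      demoInts = firstAndRest [1, 2, 3]   -- (1, [2,3])

      demoChars :: (Char, String)
      demoChars = firstAndRest "abc"      -- ('a', "bc")
      ```

      The same definition is reused at Int and Char with no changes: that is the "for all T" property.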

    6. The length function is also an example of a polymorphic

      Universal polymorphism applies 'for all T'

    7. type system infers the correct types for us

      ML also has good type inference.

    8. type errors are detected at compile-time

      Static typing: type errors are detected at compile time.

    1. Lambdas are basically anonymous functions

      Go also allows this: it can return anonymous functions as closures, and the closure itself can additionally accept parameters...

    2. Putting a space between two things is simply function application.

      Function application has left associativity.
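      A small sketch (my own) of what left associativity of application means in Haskell:

      ```haskell
      add3 :: Int -> Int -> Int -> Int
      add3 x y z = x + y + z

      -- add3 1 2 3 parses as ((add3 1) 2) 3
      result :: Int
      result = add3 1 2 3

      sameResult :: Int
      sameResult = ((add3 1) 2) 3   -- the fully parenthesized form, identical parse
      ```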

    1. exhaustive patterns

      There should be a catch-all pattern to make the match exhaustive...
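      A minimal sketch (function name is my own) of why the catch-all matters:

      ```haskell
      -- Without the final wildcard clause, GHC would warn about non-exhaustive
      -- patterns and `describe 4` would crash at run time.
      describe :: Int -> String
      describe 0 = "zero"
      describe 1 = "one"
      describe _ = "many"   -- catch-all pattern makes the match exhaustive
      ```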

    2. Pattern matching consists of specifying patterns to which some data should conform and then checking to see if it does and deconstructing the data according to those patterns.

      Pattern matching is ONE-WAY in Haskell, unlike Prolog, where you can also pattern match on the output to deduce which inputs would produce it (unification is two-way pattern matching).

    1. length function has a type declaration of length :: [a] -> Int

      polymorphic

    2. Haskell is a statically typed language, it has to know all the types before the code is compiled

      I thought statically typed means it knows all the types AT compile time...

    1. Note that weeding out lists by predicates is also called filtering.

      We can apply a predicate in a list comprehension.

    2. list comprehension

      List comprehensions are a general concept, not exclusive to functional programming. Still important.

    3. You can also use ranges to make infinite lists by just not specifying an upper limit.

      Infinite lists allow for dataflow programs.

    4. reverse reverses a list.

      How to implement? Use an accumulator together with the head and tail of the list: rev (x:xs) acc accumulates the head in acc and recurses on the tail. Note that 1:[] means [1], and [1]++[2,3] is [1,2,3].
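      The accumulator idea from the note, written out (names are my own, not the Prelude's):

      ```haskell
      -- reverse via an accumulator: cons each head onto acc, recurse on the tail
      rev :: [a] -> [a]
      rev xs = go xs []
        where
          go []     acc = acc
          go (y:ys) acc = go ys (y:acc)   -- head moves to the front of acc
      ```

      Because (:) is O(1), this runs in linear time, unlike a naive rev xs = rev (tail xs) ++ [head xs].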

    5. A common task is putting two lists together. This is done by using the ++ operator.

      Both sides of ++ must be lists. If we want to concatenate one element onto a list, we need to write it like [element] ++ list, not element ++ list.
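      A tiny sketch (my own) of wrapping a single element so both sides of ++ are lists:

      ```haskell
      -- ++ needs lists on both sides; wrap the single element in a list first
      prepend :: a -> [a] -> [a]
      prepend x xs = [x] ++ xs   -- equivalent to x : xs
      ```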

    6. if you see something like bar (bar 3), it doesn't mean that bar is called with bar and 3 as parameters. It means that we first call the function bar with 3 as the parameter to get some number and then we call bar again with that number

      Parentheses don't mean function application in Haskell; they are only used for grouping.

    7. we just separate the function name from the parameter with a space

      Juxtaposition with a space IS the syntax of function application.

    1. Haskell is statically typed. When you compile your program, the compiler knows which piece of code is a number, which is a string and so on.

      Types are known at compile time, which probably reduces run-time errors.

    2. and a function doubleMe which multiplies every element by 2 and then returns a new list. If we wanted to multiply our list by 8 in an imperative language and did doubleMe(doubleMe(doubleMe(xs))), it would probably pass through the list once and make a copy and then return it. Then it would pass through the list another two times and return the result. In a lazy language, calling doubleMe on a list without forcing it to show you the result ends up in the program sort of telling you "Yeah yeah, I'll do it later!". But once you want to see the result, the first doubleMe tells the se

      Haskell does lazy evaluation, which allows infinite lists as a consequence.
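      A sketch (my own) of the consequence the note mentions: laziness makes infinite lists usable, because nothing is computed until demanded.

      ```haskell
      -- conceptually infinite; nothing is evaluated yet
      doubles :: [Integer]
      doubles = map (*2) [1..]

      -- forcing only a finite prefix terminates: [2,4,6,8,10]
      firstFive :: [Integer]
      firstFive = take 5 doubles
      ```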

    3. So in purely functional languages, a function has no side-effects. The only thing a function can do is calculate something and return it as a result.

      No side effects. Referential transparency: a function performs no assignment to outside state?

  2. homepage.divms.uiowa.edu homepage.divms.uiowa.edu
    1. 1. Call by name is the same as normal order reduction except that no redex in a lambda expression that lies within an abstraction (within a function body) is reduced. With call by name, an actual parameter is passed as an unevaluated expression that is evaluated in the body of the function being executed each time the corresponding formal parameter is referenced. Normal order is ensured by choosing the leftmost redex, which will always be an outermost one with an unevaluated operand. 2. Call by value is the same as applicative order reduction except that no redex in a lambda expression that lies within an abstraction is reduced. This restriction corresponds to the principle that the body of a function is not evaluated until the function is called (in a β-reduction). Applicative order means the argument to a function is evaluated before the function is applied.

      Haskell performs call by name, but once an evaluation has occurred the result is remembered so it won't have to be re-evaluated (call by need).

    2. The α-reduction rule simply allows the changing of bound variables as longas there is no capture of a free variable occurrence.

      Alpha reduction: change a bound variable's name to avoid capture when substitution happens.

    3. LAMBDA REDUCTION. Simplifying or evaluating a lambda expression involves reducing the expression until no more reduction rules apply. The main rule for simplifying a lambda expression, called β-reduction

      Simplifying or evaluating a lambda expression is reduction.

    4. (λx . (mul y x))[y→x] to get (λx . (mul x x)) is unsafe since the result represents a squaring operation whereas the original lambda expression does not.

      example of unsafe substitution

    5. The notation E[v→E1] refers to the lambda expression obtained by replacing each free occurrence of the variable v in E by the lambda expression E1. Such a substitution is called valid or safe if no free variable in E1 becomes bound as a result of the substitution E[v→E1]

      When substituting, must be careful.

    6. Evaluating a lambda expression is called reduction.

      There are alpha and beta reductions. Beta reductions are substitutions; alpha reductions rename bound variables so that free variables aren't captured.
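      A small worked example (my own) combining the two rules:

      ```latex
      % Beta-reducing (\lambda x . \lambda y . x\,y)\,y naively would capture the
      % free y, so first alpha-rename the bound y to z, then substitute safely:
      (\lambda x . \lambda y . x\,y)\,y
        \;\to_\alpha\; (\lambda x . \lambda z . x\,z)\,y
        \;\to_\beta\; \lambda z . y\,z
      ```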

    7. higher-order functions—that is, functions that take functions as arguments and/or return a function as their result.

      definition of higher order functions.

    8. (D → D) → D → D

      Parentheses needed, given the right associativity of ->.

    9. of associating → to the right

      -> associates to the right. E.g. N->N->N is N->(N->N). And if you need it the other way, you need parentheses.

    10. curried version of the function with the property that argumentsare supplied one at a time

      Curry is a form of function definition, not function application.
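      A sketch (my own names) contrasting a curried definition, where arguments are supplied one at a time, with an uncurried one taking a pair:

      ```haskell
      -- curried: Int -> Int -> Int means Int -> (Int -> Int) by right associativity
      mulC :: Int -> Int -> Int
      mulC x y = x * y

      -- supplying only the first argument yields a new function (partial application)
      double :: Int -> Int
      double = mulC 2

      -- uncurried version for contrast: takes a single pair, cannot be partially applied
      mulU :: (Int, Int) -> Int
      mulU (x, y) = x * y
      ```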

    11. application has a higher precedence than abstraction

      Application has a higher precedence than abstraction, so λx . A B means λx . (A B): the abstraction's body extends as far to the right as possible.

    12. Function application associates to the left. E1 E2 E3 means ((E1 E2) E3)

      Function application associates to the left.

    13. the number of parentheses in lambda expressions can become quite large

      There is a convention which allows writing fewer parentheses.

    14. In an abstraction, the variable named is referred to as the bound variable and the associated lambda expression is the body of the abstraction

      In \x -> x+x, the x after \ is bound and the body is x+x.

    15. 140 CHAPTER 5 THE LAMBDA CALCULUS. 5.1 CONCEPTS AND EXAMPLES. Our description of the lambda calculus begins with some motivation for the notation. A function is a mapping from the elements of a domain set to the elements of a codomain set given by a rule—for example, cube : Integer → Integer where cube(n) = n^3. Certain questions arise from such a function definition: What is the value of the identifier “cube”? How can we represent the object bound to “cube”? Can this function be defined without giving it a name? Church’s lambda notation allows the definition of an anonymous function, that is, a function without a name: λn . n^3 defines the function that maps each n in the domain to n^3. We say that the expression represented by λn . n^3 is the value bound to the identifier “cube”. The number and order of the parameters to the function are specified between the λ symbol and an expression. For instance, the expression n^2+m is ambiguous as the definition of a function rule: (3,4) |→ 3^2+4 = 13 or (3,4) |→ 4^2+3 = 19. Lambda notation resolves the ambiguity by specifying the order of the parameters: λn . λm . n^2+m or λm . λn . n^2+m. Most functional programming languages allow anonymous functions as values;

      Haskell allows lambda expressions (anonymous functions) with the '\x ->' syntax. Go also allows anonymous functions, and provides some support for closures.

    16. The number and order of the parameters to the function are specified between the λ symbol and an expression

      The lambda expression doesn't necessarily have a name; it just specifies the number and order of the parameters.

    17. higher-order functions of the lambda calculus.

      Higher order functions can be expressed in lambda calculus. Functional programming languages include higher order functions.

    18. Alonzo Church developed the lambda calculus in the 1930s

      Creator of lambda calculus.

    1. While static type checkers/inferencers offer a great help to programmers to catch (trivial) errors at early stages in their programs and to programming language implementers to generate more efficient code (by assuming the untyped memory model), they have an inherent drawback: they disallow correct programs which do not type check.

      Some correct programs that do not type check are disallowed.

    2. Let-polymorphism. The idea is to type the let language constructs differently.

      This polymorphism allows applying a function to multiple types. Here they use the length of a list as an example, since you don't need to consider the type of the elements to compute the length.
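      A sketch (my own) of let-polymorphism in Haskell: a let-bound name gets a generalized, polymorphic type, so it can be used at two different types in one expression.

      ```haskell
      -- f is generalized to (a -> a) by the let, so it can be applied
      -- to a Bool and to a Char in the same body
      pairDemo :: (Bool, Char)
      pairDemo = let f = id
                 in (f True, f 'a')

      -- length is the standard example: it ignores the element type entirely
      lenDemo :: (Int, Int)
      lenDemo = (length [1, 2, 3 :: Int], length "abcd")
      ```

      A lambda-bound parameter could not be used at two types like this; the generalization happens specifically at the let binding.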

    3. since the identity function types to a polymorphic type, t(0) -> t(0)

      Identity function can take any type.

    4. Let Polymorphism

      Was this polymorphism named in class? It is probably static, since it is resolved at compile time and not run time.

    5. most general unifier

      The mgu is the unifier/substitution that allows us to type things as generally as possible?

    6. The technique that we used to solve the type equations is called unification.

      Unification is used in Prolog to produce results based on inference. Is this the same? Some symbolic substitution?

    7. If any conflict is detected while solving the system then we say that the process of typing E failed,

      We tried to solve the constraints and found a contradiction, i.e. typing fails.

    8. type inference and consists of two steps: 1. Collect information in the form of type parametric constraints by recursively analyzing the expression; 2. Solve those constraints.

      Type inference consists of collecting type parametric constraints AND solving those constraints.

    9. Polymorphic Functions

      There are three different kinds of polymorphism.

    10. if x then y else z + x is seen then one knows that there is a typing error because the type of x cannot be both integer and bool

      Example of a typing error.

    11. type check programs without the need for the user to provide any auxiliary type information

      The ideal case is that all types can be inferred.

    12. infer the intended types from how the names are used.

      Is all type inference context based?

  3. Nov 2016
    1. What about some popular languages? Common Lisp: strong, dynamic, implicit Ada: strong, static, manifest Pascal: strong, almost static, manifest Java: strong, some static and some dynamic, manifest ML: strong, static, implicit JavaScript and Perl: weak, dynamic Ruby and Python: strong, dynamic

      TYPING CHARACTERISTICS OF SOME LANGUAGES

      E.g. Lisp: strongly typed, dynamically typed, implicit.

    2. ML

      ML has type inference. Haskell has good type inference too; no need to specify types explicitly.

    3. Type Conversion Explicit operation that takes in an object of one type and returns an object of a different type that has the "same value" (not necessarily the same bit pattern). Type Coercion Implicit Non-converting Type Cast "Reinterpret" an object as an object of another type by preserving its bit pattern, regardless of value.

      Important definitions. Type conversion -> an operation that takes an object of one type and returns an object of another type that has the 'same value'. Type coercion -> implicit; the value is forced to be compatible with the other type. Non-converting type cast -> reinterpret the object's bit pattern without converting the value.

    4. ML seems to use structural equivalence:

      Structural equivalence example: point and pair are equivalent.

      • int*int means a tuple type? He directly assigns a previously created point of type int*int to a pair, which is structurally equivalent.
    5. Type Equivalence. When are two types the same?

      There are two kinds of equivalence, and some languages implement only one of the two. NAME EQUIVALENCE: even if the structures are identical, different names mean the types are not equivalent. STRUCTURAL EQUIVALENCE: the names differ but the structures are identical, so the types are equivalent.

    6. A set of basic types A mechanism for defining new types Rules for type equivalence Rules for type compatibility Rules for type inference Rules for handling type clashes (a.k.a type checking)

      A type system must provide rules to resolve all of these issues.

    7. A value's type constrains the way it may be used in a program.

      Only certain operations apply to a type. A type is composed of the operations applicable to it as well as its values.

  4. connex.csc.uvic.ca connex.csc.uvic.ca
    1. Suppose that each variant of the record SType is to be processed by its own subprogram. A case-statement must be used to dispatch

      This goes hand in hand with variants: for each alternative of the variant, a different subprogram executes for that case.

    2. The assumption underlying variant records is that exactly one of the fields of the union is meaningful at any one time

      Just one of the Fields of the variant 'exists' at one time. It is the one chosen for that 'case'.

    3. the field data is allocated enough memory to contain the larger of the array field a or the record field r

      The largest alternative is the one whose size is allocated, as noted before.

    4. typedef int Arr[10]; typedef struct {float f1; int i1;} Rec;

      The two alternatives for the variant.

    5. a union type in C can be used to create a variant record which can itself be embedded into a structure that includes the common tag field

      A union type in C allows creating a variant record.

    6. To solve these types of problems, programming languages introduce a new category of types called variant records which have alternative lists of fields. Thus a variable may initially contain a value of one variant and later be assigned a value of another variant with a completely different set of fields. In addition to the alternative fields, there may be fields which are common to all records of this type;

      These variant records have sets of fields for the different 'cases' and also default fields that are common to all alternatives.
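      Haskell's algebraic data types play the role of safe variant records: the constructor is the tag, and pattern matching is the dispatch. A sketch with made-up names, loosely mirroring the Arr/Rec union from the text:

      ```haskell
      -- the constructor acts as the tag; only one alternative "exists" at a time
      data SType
        = ArrVariant [Int]        -- like the array alternative
        | RecVariant Float Int    -- like the record alternative {float f1; int i1;}

      -- dispatch by pattern matching instead of a case-statement on a tag field
      describeS :: SType -> String
      describeS (ArrVariant xs)  = "array of length " ++ show (length xs)
      describeS (RecVariant _ i) = "record with i1 = " ++ show i
      ```

      Unlike a C union, the compiler rejects any attempt to read a RecVariant field out of an ArrVariant value.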

    7. Variant records are used when it is necessary to interpret a value in several different ways at run-time. Common examples are:

      Which type of polymorphism, dynamic or static? The interpretation is determined at run time, so dynamic.

    8. The first parameter, Item, is declared with the notation (<>)

      In the declaration of sort, there is an extra line using this <> symbol.

    9. The generic parameters are compile-time parameters and are used by the compiler to generate the correct code for the instance.

      This tells us that generics are a static form of polymorphism, NOT dynamic.

    10. To get a (callable) procedure, you must instantiate the generic declaration, that is, create an instance by furnishing generic actual parameters

      The generic declaration does not declare a procedure; you must instantiate it by passing actual parameters to the generic template.

    11. generics that allow the programmer to define a template of a subprogram and then to create instances of the template for several types

      Reduces amount of code to write.

    12. if we are working only with homogeneous data structures such as an array of integers, or a list of floating-point numbers, it is sufficient to use static polymorphism to create instances of a program template at compile time

      Generics: which kind of polymorphism? Static, because resolved at compile time; different instances of a procedure are created from a template.

    13. the same name can be used for two or more different subprograms provided that the parameter signatures are different.

      Aha! That's what I said.

    14. Overloading is the use of the same name to denote different objects in a single scope.

      I had seen overloading of functions, where functions have the same name but different signatures.

    15. Polymorphism can be either static or dynamic. In static polymorphism, the multiple forms are

      Polymorphism can be static (resolved at compile time) or dynamic (not resolved until run time).

    16. Somehow the program must be able to identify the static chain, the link of activation records that defines the static context of the procedure according to the rules of scope, as opposed to the dynamic chain of procedure calls at execution time. As an extreme example, consider a recursive procedure: there may be dozens of activation records in the dynamic chain (one for each recursive call), but the static chain will consist only of the current record and the record for the main procedure.

      The static chain depends on the context given by the rules of scope and NOT, like the dynamic chain, on which procedures were called before (e.g. the chain of recursive calls).

    17. Somehow the program must be able to identify the static chain, the link of activation records that defines the static context of the procedure according to the rules of scope, as opposed to the dynamic chain of procedure calls at execution time. As an extreme example, consider a recursive procedure: there may be dozens of activation records in the dynamic chain (one for each recursive call), but the static chain will consist only of the current record and the record for the main procedure

      Important concept, the static chain: a link of activation records.

    18. If deeper nesting is used, each activation record contains a pointer to the previous one. These pointers to activation records form the dynamic chain (Figure 7.9). To access a shallow variable (one that is less deeply nested), instructions have to be executed to “climb” the dynamic chain. This potential inefficiency is the reason that accessing intermediate variables in deeply nested procedures is discouraged. Accessing the immediately previous level requires just one indirection and an occasional deep access should not cause any trouble, but a loop statement should not contain statements that reach far back in the chain.

      Dynamic chain is an important concept: each activation record points to the previous one on the stack.

    19. If the only recursive call in a procedure is the last statement in the procedure, it is possible to automatically translate the recursion into iteration.

      Definition of Tail Recursion.

    20. tail-recursion

      I believe Haskell has tail recursion?
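      Haskell functions can certainly be written tail-recursively; a sketch (my own) where the recursive call is the last thing done, so it can be compiled into a loop:

      ```haskell
      -- tail-recursive factorial: the accumulator carries the partial product,
      -- and the recursive call to go is in tail position
      factorial :: Integer -> Integer
      factorial n = go n 1
        where
          go 0 acc = acc
          go k acc = go (k - 1) (k * acc)   -- tail call
      ```

      (With laziness, making the accumulator strict matters in practice, but the tail-call shape is the point here.)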

    21. However, the stack may hold other data besides that associated with a procedure call (such as temporary variables, see Section 4.7), so it is customary to maintain an additional index called the bottom pointer which points to the start of the activation record (see Section 7.7). Even if the top-of-stack index varies during the execution of the procedure, all the data in the activation record can be accessed at fixed offsets from the bottom pointer.

      The top of stack might not necessarily point at where the activation record begins, so we keep a bottom pointer to the start of the activation record and access our variables at fixed offsets from it.

    22. Upon completion of the procedure the above steps are reversed: 1. The top-of-stack index is decremented by the amount of memory allocated for the local variables. 2. The return address is popped and the instruction pointer reset. 3. The top-of-stack index is decremented by the amount of memory allocated for the actual parameters

      After completion: decrement top by the memory allocated for local variables; pop the return address and reset the instruction pointer; decrement top by the memory allocated for the actual parameters.

    23. Activation records. The stack is actually used to support the entire procedure call and not just the allocation of local variables. The segment of the stack associated with each procedure is called the activation record for the procedure. In outline, a procedure call is implemented as follows (see Figure 7.8): 1. The actual parameters are pushed onto the stack. They can be accessed as offsets from the start of the activation record. 2. The return address is pushed onto the stack. The return address is the address of the statement following the procedure call. 3. The top-of-stack index is incremented by the total amount of memory required to hold the local variables. 4. A jump is made to the procedure code.

      What is an activation record, and what does it contain? An activation record is used to support a procedure call. It contains the actual parameters for that call and the return address for when the procedure is done.

      What happens when the activation record is created? The top-of-stack index is incremented by the amount of memory required for the local variables, and a jump is made to the procedure code.

    24. How is a stack used in the implementation of a programming language? A stack is used to store information related to a procedure call, including the local variables and parameters that are automatically allocated upon entry to the procedure and released upon exit.

      The stack allocates memory for procedure calls and releases the memory upon exit from the call. There is a 'call chain' and a 'dynamic chain'?

    25. stack contains an additional piece of data—the top-of-stack pointer. This is an index to the first available empty position in a stack

      Stack definition.

    26. A stack is a data structure that stores and retrieves data in a Last-In, First-Out (LIFO) order.

      Definition of Stack.

    27. The second requirement arises from consideration of the lifetime of the local variables. In the example, the lifetime of the formal parameter n is from the moment that the procedure is called until it is completed. But before the procedure is completed, another call is made and that call requires that memory be allocated for the new formal parameter. To compute factorial(4), a memory location is allocated for 4, then 3 and so on, five locations altogether. The memory cannot be allocated before execution, because the amount depends on the run-time parameter to the function. Section 7.6 shows how this allocation requirement is directly supported by the stack architecture.

      Memory must be able to be dynamically allocated for the other calls.

    28. During run-time, it must be possible to allocate an arbitrary number of memory cells for the parameters and local variables.

      Stack frame? Some structure to allocate all the parameters and local variables for the new recursive calls, popped from the stack when done.

    29. declaration has associated with it three properties: Scope: the scope of a variable is the segment of the program within which it is defined. Visibility: a variable is visible within some subsegment of its scope if it can be directly accessed by name. Lifetime: the lifetime of a variable is the interval during the program’s execution when memory is assigned to the variable. Note that lifetime is a dynamic property of the run-time behavior of a program, while scope and visibility relate solely to the static program text.

      Definitions for the declaration of a variable in Ada.

    30. Block structure was first defined in the Algol language which includes both procedures and unnamed blocks.

      Algol has block structure and unnamed blocks.

    31. referentially transparent

      Referential transparency means functions are restricted from accessing (mutating?) global variables; i.e., that's what functions aren't allowed to do.

    32. The real disadvantage of call-by-reference is that the mechanism is inherently unsafe. Suppose that for some reason the subprogram thinks that the actual parameter is an array whereas in fact it is just a single integer. This can cause an arbitrary amount of memory to be smeared, since the subprogram is working on the actual parameter, and not just on a local copy.

      Problem with call by reference - unsafe. Can cause problems because it directly works on the actual parameter.

    33. The solution, which is known as call-by-reference or reference semantics, is to pass the address of the actual parameter and to access the parameter indirectly

      Call by reference is different than copy out semantics. In copy out, at the end you copy the result into the variable pointed to by the address, but in call by reference, you pass the address of the actual parameter and access the parameter indirectly.

    34. copy-out semantics. The actual parameter must be a variable, and the subprogram is passed the address of the actual parameter which it saves

      Copy-out semantics: the subprogram saves the address and copies the result back into the actual parameter when it returns.

    35. we will often want to modify the actual parameter despite the fact that such modification is “unsafe”:

      Motivation for call by reference: sometimes we want to modify the actual parameter.

    36. Since only a copy of the actual parameter is passed, the subprogram cannot cause any damage to the actual parameter

      No damage is done to the actual parameter, which belongs to the calling program.

    37. The formal parameter is just a variable declared within the subprogram, so the obvious mechanism is to copy the value of the actual parameter into the memory location allocated to the formal parameter. This mechanism is called copy-in semantics or call-by-value.

      Call by value means to copy the value of the actual parameter into the memory allocated to the formal parameter.

    38. parameters are values, they can be constants or expressions

      parameters can be expressions.

    39. A formal parameter is a declaration that appears in the declaration of the subprogram. The computation in the body of the subprogram is written in terms of formal parameters. An actual parameter is a value that the calling program sends to the subprogram.

      A formal parameter is in the declaration; an actual parameter is the value the subprogram is called with from the calling program.

    40. Note that C does implicit type conversions in many cases, while in Ada the result type must exactly match the context.

      Function return values must match the context in Ada - no automatic type conversion.

    41. Semantics is the meaning of an expression (program) in a (programming) language. While the syntax of languages is very well understood, semantics is much more difficult.

      Semantics - meaning of program

    42. Various other symbols are used to make the rules more concise: [ ] optional, { } zero or more repetitions, | or

      Symbols that indicate optionality, repetition and alternatives for other symbols.

    43. In EBNF, we start with a highest level entity program and use rules to decompose entities until single characters are reached

      Extended Backus Naur Form. Start with a high level entity and decompose into smaller and smaller units.

    44. The syntax of a (programming) language is a set of rules that define what sequences of symbols are considered to be valid expressions (programs) in the language

      Syntax - set of rules that define what sequences of symbols are considered to be valid expressions in the language.

    45. When the slow memory is disk and the fast memory is ordinary RAM (Random Access Memory), the concept is called virtual memory or paged memory

      Virtual memory: the slow memory is disk, the fast memory is RAM.

    46. Memory access: Load the contents of a memory word into a register, and Store the contents of a register into a memory word. Arithmetic instructions such as add and subtract. These operations are performed on the contents of two registers (or sometimes between the content of a register and the content of a memory word). The result is left in a register. For example: add R1,N adds the contents of the memory word N to the contents of the register R1 and leaves the result in the register. Compare and jump. The CPU can compare two values such as the contents of two registers; depending on the result (equal, greater than, etc.), the instruction pointer is changed to point to another instruction. For example: jumpeq R1,L1 ... L1: ... causes the computation to continue with the instruction labeled L1 if the contents of R1 are zero; otherwise, the computation continues with the next instruction

      Classes of instructions: memory access, arithmetic, compare and jump.

    47. The CPU contains a set of registers which are special memory locations in which computation is done. The CPU can execute any one of a set of instructions which are stored in memory; the CPU maintains an instruction pointer which points to the location of the next instruction to be executed

      Important Computer Architecture concepts.

    48. computer architecture so that a minimal set of terms can be agreed upon. A computer is composed of a central processing unit (CPU) and memory (Figure 1.1: Computer architecture). Input/output devices can be considered to be a special case of memory.

      Parts of a computer.

    49. We will discuss two non-imperative language formalisms: (1) functional programming (Chapter 16), which is based on the mathematical concept of pure functions, like sin and log that do not modify their environments, unlike so-called functions in an ordinary language like C which can have side-effects; (2) logic programming

      two paradigms

    50. Another approach to abstract programming is to describe a computation using equations, functions, logical implications, or some similar formalism.

      Functional programming or predicate logic.

    51. Java is both a programming language and a model for developing software for networks. The model is discussed in Section 3.11. The language is similar to C++ in syntax, but its semantics is quite different, because it is a “pure” OO language like Eiffel and requires strong type checking.

      Java requires strong type checking.

    52. There is a technical aspect of these pioneering object-oriented languages that prevented wider acceptance of OOP: allocation, operation dispatching and type checking are dynamic (run-time) as opposed to static (compile-time).

      Early OOP languages did allocation, dispatching, and type checking dynamically (at run time), which hindered wider adoption.

    53. The first object-oriented programming language, Simula, was created in the 1960’s by K. Nygaard and O.-J. Dahl for system simulation: each subsystem taking part in the simulation was programmed as an object.

      Simula was the first OOP language.

    54. The three elementary operations of Lisp are: car(L) and cdr(L), which extract the head and tail of a list L, respectively, and cons(E, L), which creates a new list from an element E and an existing list L

      elementary operations
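The three operations can be sketched in Python, modeling a Lisp list as nested pairs (this encoding is my own illustration, not from the source):

```python
# Lisp's three elementary list operations, sketched with Python pairs:
# a list is either None (empty) or a (head, tail) pair built by cons.
def cons(e, l):  # new list: element e in front of list l
    return (e, l)

def car(l):      # head of the list
    return l[0]

def cdr(l):      # tail of the list (everything after the head)
    return l[1]

xs = cons(1, cons(2, cons(3, None)))  # the list (1 2 3)
print(car(xs))       # 1
print(car(cdr(xs)))  # 2
```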

    55. Lisp’s basic data structure is the linked list.

      Programming organized around a single data structure: the linked list.

    56. e C and Pascal) leave to the operating system,

      C and Pascal leave concurrency to the operating system, but Ada tries to handle it itself.

    57. Ada extends the scope of programming languages by supporting error handling and concurrent programming which are traditionally left to (non-standard) operating system functions

      Ada - Concurrent programming supported.

    58. C++ language, extending C to include support for object-oriented programming similar to that provided by the Simula language.

      Simula and C++ are object-oriented programming languages.

    59. t extremely easy to write programs with obscure bugs because unsafe constructs are not checked by the compiler as they would be in Pascal.

      The C compiler doesn't check unsafe constructs, unlike Pascal's.

    60. C was developed by Dennis Ritchie of Bell Laboratories in the early 1970’s as an implementation language for the UNIX operating system.

      The C language abstracts machine-language concerns (registers, etc.) away from the programmer, yet still lets programmers work close to memory through pointers and bit-level operations.

    61. Modula (now in version 3 which supports object-oriented programming) is a popular alternative to non-standard Pascal dialects

      Also by Wirth (like Pascal), but designed for larger programs: Pascal couldn't divide a program into modules in separate files; Modula can.

    62. ut perhaps the most famous descendent of Algol is Pascal, developed in the late 1960’s by Niklaus Wirth. The motivation for Pascal was to create a language that could be used to demonstrate ideas about type declarations and type checking. In later chapters, we will argue that these ideas are among the most significant concepts ever proposed in language design

      Algol influences Pascal, which implements type checking and type declarations.

    63. Simula, one of the first simulation languages

      Simula influenced by Algol

    64. Algol has influenced language design more than any other.

      Parent of C, C++, Java

    65. e abstraction quickly won over most programmers: quicker and more reliable development, and less machine dependence since register and machine instructions are abstracted away. Because most early computing was on scie

      Fortran's abstraction is an advantage because programs need not specify minute details such as registers; the compiler produces them.

    66. Fortran was the first programming language that significantly rose above the level of assembly language. It was developed in the 1950’s by an IBM team led by John Backus,

      Fortran <- John Backus

    67. rolog, you lose the ability you had in C to specify arbitrary linked structures using pointers.

      Prolog implements its own 'search' - so programmers can't specify themselves how it will be performed.

    68. The higher the abstraction, the more detail is lost.

      Interesting.

    69. why there are hundreds of programming languages: two different classes of problems may demand different levels of abstraction, and different programmers have different ideas on how abstraction should be done.

      I guess that's why a large number of different compilers are needed.

    70. An assembler takes a program written in assembly language, which represents each instruction as a symbol, and translates the symbols into a binary representation suitable for execution on the computer

      Improvement over the tedious past in which mechanical levers had to be changed per program.

    71. rograms and languages can be defined as purely formal mathematical objects.

      The computer needs a way to interpret the mathematical symbols that are inherent in a programming language.

    1. An interpreter is another way of implementing a programming language. Interpretation shares many aspects with compiling. Lexing, parsing and type-checking are done in an interpreter just as in a compiler.

      The 'front end' is the same with an interpreter as with a compiler, (Lexical analysis, Syntactical Analysis, and Type checking) but the syntax tree generated in Interpreter is used directly to evaluate expressions.
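As a toy illustration (my own, not the book's code) of what "used directly" means: the interpreter walks the syntax tree the front end built and computes the result on the spot, generating no machine code:

```python
# A tiny expression AST evaluated directly, as an interpreter does:
# the tree itself drives the computation.
def evaluate(node):
    kind = node[0]
    if kind == "num":                 # leaf: a literal number
        return node[1]
    if kind == "add":                 # inner node: evaluate both children
        return evaluate(node[1]) + evaluate(node[2])
    if kind == "mul":
        return evaluate(node[1]) * evaluate(node[2])
    raise ValueError(f"unknown node: {kind}")

# syntax tree for (2 + 3) * 4, as a parser might build it
tree = ("mul", ("add", ("num", 2), ("num", 3)), ("num", 4))
print(evaluate(tree))  # 20
```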

    2. The phases of a compiler

      Lexical analysis → syntax analysis (parsing) → type checking → intermediate code generation → register allocation → machine code generation → assembly and linking.
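The first phase, lexical analysis, can be sketched with regular expressions (the token names here are illustrative, not the book's):

```python
import re

# A toy lexer: split source text into (token-kind, text) pairs,
# discarding whitespace. This is the first phase of a compiler.
TOKEN_SPEC = [
    ("NUM",   r"\d+"),
    ("IDENT", r"[A-Za-z_]\w*"),
    ("OP",    r"[+\-*/=]"),
    ("SKIP",  r"\s+"),
]

def lex(source):
    pattern = "|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_SPEC)
    return [(m.lastgroup, m.group())
            for m in re.finditer(pattern, source)
            if m.lastgroup != "SKIP"]

print(lex("x = 2 + 40"))
# [('IDENT', 'x'), ('OP', '='), ('NUM', '2'), ('OP', '+'), ('NUM', '40')]
```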

    3. In order to reduce the complexity of designing and building computers, nearly all of these are made to execute relatively simple commands (but do so very quickly). A program for a computer must be built by combining these very simple commands into a program in what is called machine language. Since this is a tedious and error-prone process most programming is, instead, done using a high-level programming language. This language can be very different from the machine language that the computer can execute, so some means of bridging the gap is required. This is where the compiler comes in.

      Compiler transforms high level language into machine language/ machine code.

  5. inst-fs-iad-prod.inscloudgate.net
    1. Type 2 grammars correspond to the BNF grammars

      BNF describes context-free grammars: only one nonterminal on the left-hand side of each production (NONTERMINAL -> definition).

    2. These rules are called context-sensitive because the replacement of a nonterminal by its definition depends on the surrounding symbols

      Type 1 grammars are restricted: the right-hand side of a rule must have at least as many symbols as the left-hand side. They are called context-sensitive because substituting a nonterminal by its definition depends on the surrounding symbols.

    3. Definition: A grammar <Σ, N, P, S> consists of four parts:

      Σ (sigma) - terminal symbols; N - nonterminal symbols; P - productions/rules; S - start symbol.
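The four parts can be written down as data, with a small context-free example deriving one sentence (the grammar S -> aSb | ab is my own illustration):

```python
# The four parts of a grammar <Σ, N, P, S> as Python data.
sigma = {"a", "b"}            # terminal symbols
N = {"S"}                     # nonterminal symbols
P = {"S": ["aSb", "ab"]}      # productions: S -> aSb | ab
start = "S"

def derive(steps):
    """Rewrite the leftmost S using the chosen production at each step."""
    form = start
    for i in steps:
        form = form.replace("S", P["S"][i], 1)
    return form

# S => aSb => aaSbb => aaabbb
print(derive([0, 0, 1]))  # 'aaabbb'
```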

    1. Java may wind up as the instructional language of tomorrow as it is truly object-oriented and implements advanced techniques such as true portability of code and garbage collection.

      Java is object-oriented and implements garbage collection.

    2. Unix gives C such advanced features as dynamic variables, multitasking, interrupt handling, forking, and strong, low-level, input-output.

      Via Unix, C gained dynamic variables, multitasking, interrupt handling, forking, and strong low-level input/output.

    3. ascal did not implement dynamic arrays, or groups of variables

      Problem

    4. Pascal also helped the development of dynamic variables, which could be created while a program was being run, through the NEW and DISPOSE commands.

      Dynamic variables implemented by Pascal.

    5. Algol implemented some novel concepts, such as recursive calling of functions,

      Algol implements recursion.

    6. The Algol language was created by a committee for scientific use in 1958. Its major contribution is being the root of the tree that has led to such languages as Pascal, C, C++, and Java.

      Algol is parent of Pascal C C++ and Java. Uses Backus-Naur Form.

    7. LISP) language. It was designed for Artificial Intelligence (AI) research. Because it was designed for a specialized field, the original release of LISP had a unique syntax: essentially none.

      The syntax was mentioned in class to be tedious. Lots of parentheses.

    8. A compiler is a program that turns the language’s statements into 0’s and 1’s for the computer to understand

      Compilers have gotten way more complex nowadays but basically this is what they do.

    9. “conditional control transfer” (

      John von Neumann suggested structures that let code adapt to conditions and execute non-linearly, not just as a chronological sequence of steps.