Short Assignment 7 is due Monday.
Last time, we started to look at linked lists. We saw the concept, and we saw the CS10LinkedList interface in CS10LinkedList.java. Like an ArrayList, this interface uses a generic type T.

By calling methods in the CS10LinkedList interface, you can traverse a linked list to get a reference to each element of the list, as ListTraverse.java shows. Here, each element is an Integer object, just to keep things simple. (We rely on unboxing in the loop body, where we add i, which is really a reference to an Integer object, into sum.) Notice the header of the for-loop to traverse the list:
for (Integer i = myList.getFirst(); i != null; i = myList.next()) {
System.out.println("List element with value " + i);
sum += i;
}
This style of for-loop header might seem strange to you, because it doesn't use the increment or decrement operators of Java to go through the linked list, as you would for an array or ArrayList. Since we don't index into a linked list, there's no need to maintain an index and hence no need to increment.
Let's dissect the first for-loop header. The initialization sets i to be the value of the first element in the list and sets current to reference the first list element. It's the equivalent of getting the 0th element in an array. The test checks whether we've hit the end of the linked list, the equivalent of the index into an array reaching the size of the array. The update part advances current by one position in the linked list, setting i to the value of the next element in the list.
We don't have to rely on getFirst and next returning null when they hit the end of the list. The second for-loop calls the hasCurrent method to determine when that happens, but it has to explicitly call the get method within the body of the loop to get the value of the current element:
for (myList.getFirst(); myList.hasCurrent(); myList.next()) {
Integer i = myList.get();
System.out.println("List element with value " + i);
sum += i;
}
If you took CS 1, then you know that the simplest, cleanest way to implement a linked list is with a circular, doubly linked list with a sentinel. The implementation is in SentinelDLL.java. The class is generic for a type T, declared with the line

public class SentinelDLL<T> implements CS10LinkedList<T>
To start, each list element is an object of the class Element and has three instance variables:

- data is a reference to the object being stored in that list element. This object must be of the type T. For the above example with state names, when we create a SentinelDLL object, T will be a String, so that data is a reference to a String.
- next is a reference to the Element after this one in the list.
- previous is a reference to the Element before this one in the list.

The Element class is a private inner class. It has the following methods:
- The constructor takes a reference to an object of type T. It stores this reference in the instance variable data.
- toString returns the String representation of this element's data object.

Because each Element stores a reference to an object, strange things can happen if we store a reference to an object and then the object is changed. Therefore, we require that once a reference to an object is stored in an Element, the object itself should not change.
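Based on the description above, the Element class might look like the following sketch. It is simplified and pulled out of its enclosing class so that it compiles on its own; the real version in SentinelDLL.java is a private inner class of SentinelDLL.

```java
// A simplified, standalone sketch of the Element class described above.
public class Element<T> {
    T data;              // reference to the object stored in this list element
    Element<T> next;     // the Element after this one in the list
    Element<T> previous; // the Element before this one in the list

    Element(T obj) {
        data = obj;      // store the reference; the object itself must not change
    }

    public String toString() {
        // String representation of this element's data object
        return data == null ? "null" : data.toString();
    }

    public static void main(String[] args) {
        Element<String> e = new Element<>("Maine");
        System.out.println(e); // prints Maine
    }
}
```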
The class SentinelDLL implements the linked list. In fact, it implements the CS10LinkedList interface. The methods of SentinelDLL will need to access the data, next, and previous instance variables of each Element object. Because Element is a private inner class, the methods of SentinelDLL can access its instance variables, even though they are declared as private. No methods outside of SentinelDLL can access the instance variables of Element, and so no methods outside of SentinelDLL can refer to data, next, or previous.
Next we examine the declaration for the class SentinelDLL. It contains several methods, but first let's look at the instance variables.
- current references the "current" list element, which we will need for several of the linked-list operations.
- sentinel references a special list element, which we call the sentinel.

The scheme is that a linked list has exactly one sentinel, along with zero or more "real" elements. For example, the list above, with the names of three states, would contain four Element objects: the sentinel, and objects for Maine, Idaho, and Utah. The picture looks like the following, where a slash indicates a null reference:
Here, I omitted showing which Element object is pointed to by current. Despite how I had to draw the figure, each of these references points not to individual instance data, but rather to an entire Element object. The sentinel's data is a null reference.
Notice how the list is circular, in that you can start at the sentinel and follow either forward (next) or backward (previous) references and eventually get back to the sentinel.
In this scheme, every linked list, even an empty one, has a sentinel. In an empty list, both references in the sentinel point to the only Element available, namely the sentinel:

It may seem strange to have an "empty" list actually have an Element object in it, but it turns out to really simplify some of the code. You may appreciate this simplicity later on when we examine other ways to implement linked lists.
Having seen how we intend circular, doubly linked lists with a sentinel to be represented, now we examine the methods of the Element and SentinelDLL classes in SentinelDLL.java. The methods for Element are straightforward, so we won't go over them here.
The SentinelDLL constructor makes an empty list with only the sentinel, as the diagram above shows. It also sets the instance variable current to point to the only Element in town, namely the sentinel. Setting the next and previous fields of the sentinel and setting current are done by a call to clear, which makes any list empty (leaving any contents for garbage collection).
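As a sketch of what the constructor and clear might look like, using the instance-variable names described above (the class name here is invented for this standalone example, and most of the interface is omitted):

```java
// A minimal sketch of the SentinelDLL constructor and clear.
public class SentinelSketch<T> {
    private class Element {
        T data;
        Element next, previous;
        Element(T obj) { data = obj; }
    }

    private Element sentinel; // the one sentinel that every list has
    private Element current;  // the "current" list element

    public SentinelSketch() {
        sentinel = new Element(null); // the sentinel's data is null
        clear();
    }

    public void clear() {
        // An empty list is just the sentinel referencing itself in both
        // directions; any old contents are left for garbage collection.
        sentinel.next = sentinel;
        sentinel.previous = sentinel;
        current = sentinel;
    }

    public boolean isEmpty() {
        return sentinel.next == sentinel;
    }

    public static void main(String[] args) {
        SentinelSketch<String> list = new SentinelSketch<>();
        System.out.println(list.isEmpty()); // prints true
    }
}
```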
The toString method for a SentinelDLL is fairly straightforward. It uses a common style of traversing a linked list by a clever for-loop header:
String result = "";
for (Element<T> x = sentinel.next; x != sentinel; x = x.next)
result += x.toString() + "\n";
return result;
The for-loop iterates through the list, starting from the first non-sentinel on the list (sentinel.next), following next references, and stopping when it gets back to the sentinel. It concatenates the string representation of each element in the list onto a String named result, returning result at the end. Of course, this style of traversing the linked list works only within methods of the SentinelDLL class, since the instance variables sentinel and next are private to their respective classes.
The add method for a list takes an object reference obj, and it inserts it after the Element object referenced by the instance variable current. Notice that we restrict obj to be of type T. The code manipulates references to "splice in" the new element. For example, if we start from an empty list, where current = sentinel, and insert an element with the string Maine, we have the following situation:
The add method makes current reference the new Element object.
The splicing works the same when inserting into any position of the list. For example, starting from the 3-element list from before, we insert Ohio after Idaho as follows:
Let's take a careful look at how add works. First, it makes a new Element that references the given object, and x references this new Element. It is this new Element that we will add to the list. We need to do four things:
1. Make x's next reference the element following the one that current references. The assignment x.next = current.next does so.
2. Make x's previous reference current. The assignment x.previous = current does so.
3. The element following current will have a new predecessor, namely the element that x references, so we need to set the previous instance variable of this element to reference x's element. The assignment current.next.previous = x does so. The expression current.next.previous can be a bit confusing, so let's examine it carefully. current references the current element. current.next references the element following the one that current references. This element has an instance variable previous that references its predecessor (which is current at the time that the add method is called, but it's about to be updated). Since we want to assign to the previous instance variable of the Element object referenced by current.next, we put current.next.previous on the left-hand side of the assignment statement.
4. current will have a new successor, namely the element that x references, so we set the next instance variable of current's element to reference x's element. The assignment current.next = x does so.

As you can easily see from the add code, it takes constant time to insert an element into a circular, doubly linked list with a sentinel. You can also see, by the absence of if-statements, that there are no special cases.
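The four assignments can be sketched as follows in a stripped-down standalone version of the class (the class name and toString scaffolding here are invented for this example; the real code is in SentinelDLL.java):

```java
// A sketch of add, performing exactly the four reference assignments discussed.
public class AddSketch<T> {
    private class Element {
        T data;
        Element next, previous;
        Element(T obj) { data = obj; }
    }

    private Element sentinel = new Element(null);
    private Element current;

    public AddSketch() {
        sentinel.next = sentinel.previous = sentinel;
        current = sentinel;
    }

    public void add(T obj) {
        Element x = new Element(obj);
        x.next = current.next;     // 1. x's successor is current's old successor
        x.previous = current;      // 2. x's predecessor is current
        current.next.previous = x; // 3. the old successor's predecessor is now x
        current.next = x;          // 4. current's successor is now x
        current = x;               // the new element becomes the current one
    }

    public String toString() {
        String result = "";
        for (Element e = sentinel.next; e != sentinel; e = e.next)
            result += e.data + " ";
        return result.trim();
    }

    public static void main(String[] args) {
        AddSketch<String> list = new AddSketch<>();
        list.add("Maine");
        list.add("Idaho");
        list.add("Utah");
        System.out.println(list); // prints Maine Idaho Utah
    }
}
```

Notice that add itself contains no if-statements: the sentinel guarantees that current always has a successor to splice in front of.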
The remove method for a list removes the Element object that current references. You never ever remove the sentinel, so the first thing we do is check whether current references the sentinel by calling the hasCurrent method. If current references the sentinel (indicated by hasCurrent returning false), then we print an error message to System.err, rather than to System.out. On some systems, you can suppress regular output printed to System.out, but you have to go to extra lengths to suppress error messages printed to System.err. In Eclipse, when you print to System.err, the message appears in red in the console. We want to make error messages likely to be seen.
Normally, the remove method is not trying to remove the sentinel. We splice the current element out of the list and make current reference its successor in the list.

For example, to remove Idaho from the previous list:
and to remove the only element from a list:
The time to remove an element is constant. As we will see when we examine "simpler" lists, this running time is quite good; with linked lists whose representation appears simpler than that of a circular, doubly linked list with a sentinel, the time to remove an element at the ith position in the list is proportional to i.
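The splice can be sketched as follows, reusing the scaffolding from the add sketch above (again, the class name is invented for this standalone example):

```java
// A sketch of remove: splice out the current element in constant time and
// make current reference its successor.
public class RemoveSketch<T> {
    private class Element {
        T data;
        Element next, previous;
        Element(T obj) { data = obj; }
    }

    private Element sentinel = new Element(null);
    private Element current;

    public RemoveSketch() {
        sentinel.next = sentinel.previous = sentinel;
        current = sentinel;
    }

    public void add(T obj) {
        Element x = new Element(obj);
        x.next = current.next;
        x.previous = current;
        current.next.previous = x;
        current.next = x;
        current = x;
    }

    public void remove() {
        if (current == sentinel) {                // never remove the sentinel
            System.err.println("No current element to remove");
            return;
        }
        current.previous.next = current.next;     // splice out current...
        current.next.previous = current.previous; // ...in both directions
        current = current.next;                   // successor becomes current
    }

    public String toString() {
        String result = "";
        for (Element e = sentinel.next; e != sentinel; e = e.next)
            result += e.data + " ";
        return result.trim();
    }

    public static void main(String[] args) {
        RemoveSketch<String> list = new RemoveSketch<>();
        list.add("Maine");
        list.add("Idaho");
        list.add("Utah"); // current is now Utah
        list.remove();    // splice out Utah
        System.out.println(list); // prints Maine Idaho
    }
}
```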
The contains method for a SentinelDLL takes a reference obj to an object of the generic type T and looks for an element that equals obj, according to the equals method on the data field of each Element. We traverse the list, calling equals on each element's data, until a match is found. If the contains method finds such an element, it sets current to reference it, so that we can next either add a new element after it or remove it.
We could check to make sure that we haven't returned to the sentinel, along with checking whether we have a match, but we use a clever way to avoid having to check that we haven't returned to the sentinel in each iteration of the loop. We put the value we're looking for in the sentinel. That way, we're guaranteed of finding it. If where we found it was the sentinel, it wasn't there in the first place. If where we found it was not the sentinel, then it really was there. We set sentinel.data to be the same reference as obj before traversing the list, and we make sure to put a null back into sentinel.data after the traversal is done, no matter where in the list the traversal stopped.
When we use the sentinel trick, the for-loop needs no body:
for (x = sentinel.next; !x.data.equals(obj); x = x.next)
;
This process is really linear search. The time to perform it depends on the time to compare two elements. If we denote this comparison time by t, and we say that the list has n elements, then the time to find a list element is proportional to tn in the worst case (when the element is not in the list). If t is a constant that can be ignored, then the worst-case time is proportional to n.
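The whole method might look something like the following sketch, with the add scaffolding from before (class name invented for this standalone example):

```java
// A sketch of contains using the sentinel trick: put obj into the sentinel's
// data so the search is guaranteed to stop, with no end-of-list test in the loop.
public class ContainsSketch<T> {
    private class Element {
        T data;
        Element next, previous;
        Element(T obj) { data = obj; }
    }

    private Element sentinel = new Element(null);
    private Element current;

    public ContainsSketch() {
        sentinel.next = sentinel.previous = sentinel;
        current = sentinel;
    }

    public void add(T obj) {
        Element x = new Element(obj);
        x.next = current.next;
        x.previous = current;
        current.next.previous = x;
        current.next = x;
        current = x;
    }

    public boolean contains(T obj) {
        Element x;
        sentinel.data = obj;  // guarantee the search finds obj somewhere
        for (x = sentinel.next; !x.data.equals(obj); x = x.next)
            ;                 // bodyless loop: just advance until a match
        sentinel.data = null; // restore the sentinel, wherever we stopped
        if (x == sentinel)
            return false;     // we only "found" it in the sentinel
        current = x;          // a real match becomes the current element
        return true;
    }

    public static void main(String[] args) {
        ContainsSketch<String> list = new ContainsSketch<>();
        list.add("Maine");
        list.add("Idaho");
        System.out.println(list.contains("Maine")); // prints true
        System.out.println(list.contains("Ohio"));  // prints false
    }
}
```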
The remaining list methods are really easy. Note that the later methods use the isEmpty, hasCurrent, and hasNext predicates rather than just doing the tests directly. Accessing the linked list through these methods makes changing the representation easier.
- isEmpty returns true if and only if the only list element is the sentinel. That is the case precisely when the sentinel references itself.
- hasCurrent returns true if and only if there is a current element. That is the case precisely when current does not reference the sentinel.
- hasNext returns true if there are both a current element and another element after the current element.
- getFirst sets the current reference to the first element in the list and returns the data in the first element. If the list is empty, then current must reference the sentinel, and its data must be null, and so getFirst returns null when the list is empty.
- getLast is like getFirst except that it sets current to reference the last element in the list and returns its data.
- addFirst adds a new element at the head of the list and makes it the current element.
- addLast adds a new element at the tail of the list and makes it the current element.
- next moves current to current.next and returns the data in that next element. It returns null if there is no next element.
- get returns the data in the current element, or null if there is no current element.
- set assigns to the current element, printing an error message to System.err if there is no current element.

All of the above methods are in the CS10LinkedList interface. In addition, the SentinelDLL class contains one method (other than the constructor) that is not in the CS10LinkedList interface:

- previous moves current to current.previous and returns the data in that previous element. It returns null if there is no previous element.

Testing the SentinelDLL class

We can use the ListTest.java program to test the SentinelDLL class. You can use the debugger to examine the linked list if you like.
Notice that to declare and create the linked list, we specify the type that will be stored in the list. Here, it's a String:
CS10LinkedList<String> theList = new SentinelDLL<String>();
Because theList is declared as a reference to the interface CS10LinkedList, we cannot call the previous or hasPrevious methods in this driver.
Although circular, doubly linked lists with sentinels are the easiest linked lists to implement, they can take a lot of space. There are two references (next and previous) in each element, plus the sentinel node. Some applications create a huge number of very short linked lists. (One is hashing, which we'll see later in this course.) In such situations, the extra reference in each node and the extra node for the sentinel can take substantial space.
The code for singly linked lists has more special cases than the code for circular, doubly linked lists with a sentinel, and the time to remove an element in a singly linked list is proportional to the length of the list in the worst case rather than the constant time it takes in a circular, doubly linked list with a sentinel.
The SLL class in SLL.java implements the CS10LinkedList interface with a generic type T, just as the SentinelDLL class does. A singly linked list, as implemented in the SLL class, has two structural differences from a circular, doubly linked list with a sentinel:

- Each Element object in a singly linked list has no backward (previous) reference; the only navigational aid is a forward (next) reference.
- There is no sentinel, nor does the list have a circular structure. Instead, the SLL class maintains references head to the first element on the list and tail to the last element on the list.
A singly linked list with Maine, Idaho, and Utah would look like
A singly linked list with only one element would look like
And an empty singly linked list looks like
The file SLL.java contains the class definitions for Element and SLL for a singly linked list. These declarations are similar to those for circular, doubly linked lists with a sentinel. As before, the Element class is a private inner class, and all method declarations are the same. The only difference is in the instance data. We can use the same ListTest.java driver to test the singly linked list class, as long as we change the line creating the list to read
CS10LinkedList<String> theList = new SLL<String>();
Let's examine the List methods in SLL.java for singly linked lists. We will highlight those that differ from those for circular, doubly linked lists with a sentinel.
The clear method, which is called by the SLL constructor as well as being publicly available, makes an empty list by setting all instance variables (head, tail, and current) to null.
As before, the add method places a new Element object after the one that current references. Without a special case, however, there would be no way to add an element as the new head of the list, since there is no sentinel to put a new element after. Therefore, if current is null, then we add the new element as the new list head.
The code, therefore, has two cases, depending on whether current is null. If it is, we have to make the new element reference what head was referencing and then make head reference the new element. Otherwise, we make the new element reference what the current element is referencing and then make current reference the new element. If the new element is added after the last element on the list, we also have to update tail to reference the new element.
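The two cases can be sketched as follows (the class name and toString are invented for this standalone example; the real code is in SLL.java):

```java
// A sketch of the singly linked add: current == null means add at the head;
// otherwise add after current. tail is updated when the new element ends up last.
public class SLLAddSketch<T> {
    private class Element {
        T data;
        Element next;
        Element(T obj) { data = obj; }
    }

    private Element head, tail, current; // all null in an empty list

    public void add(T obj) {
        Element x = new Element(obj);
        if (current == null) {   // no current element: add at the head
            x.next = head;
            head = x;
        } else {                 // add after the current element
            x.next = current.next;
            current.next = x;
        }
        if (x.next == null)      // new element is last: update tail
            tail = x;
        current = x;
    }

    public String toString() {
        String result = "";
        for (Element e = head; e != null; e = e.next)
            result += e.data + " ";
        return result.trim();
    }

    public static void main(String[] args) {
        SLLAddSketch<String> list = new SLLAddSketch<>();
        list.add("Maine");
        list.add("Idaho");
        list.add("Utah");
        System.out.println(list); // prints Maine Idaho Utah
    }
}
```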
Compare this code to the add code for a circular, doubly linked list with a sentinel. Although there is only one directional link to maintain for a singly linked list, the code has more cases and is more complex. For either implementation, however, adding an element takes constant time.
As mentioned, removing an element from a singly linked list takes time proportional to the length of the list in the worst case—in other words, time that is linear in the length of the list—which is worse than the constant time required for a circular, doubly linked list with a sentinel. Why does it take linear time, rather than constant time? The reason is that the previous reference in a doubly linked list really helps. In order to splice out the current element, we need to know its predecessor in the list, because we have to set the next instance variable of the predecessor to the value of current.next. With the previous reference, we can easily find the predecessor in constant time. With only next references available, the only way we have to determine an element's predecessor is to traverse the list from the beginning until we find an element whose next value references the element we want to splice out. And that traversal takes linear time in the worst case, which is when the element to be removed is at or near the end of the list.
The remove method first checks that current, which references the Element object to be removed, is non-null. If current is null, we print an error message and return. Normally, current is non-null, and the remove method finds the predecessor pred of the element that current references. Even this search for the predecessor has two cases, depending on whether the element to be removed is the first one in the list. If we are removing the first element, then we set pred to null and update head. Otherwise, we have to perform a linear search, stopping when pred.next references the same element as current; once this happens, we know that pred is indeed the predecessor of the current element. (There is also some "defensive coding," just in case we simply do not find an element pred such that pred.next references the same element as current. We do not expect this to ever happen, but if it does, we have found a grave error and so we print an error message and return.) Assuming that we find a correct predecessor, we splice out the current element. We also have to update tail if we are removing the last element of the list.
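Putting those cases together, remove might be sketched like this (class name invented for this standalone example; the add scaffolding is the same as in the add sketch above):

```java
// A sketch of the singly linked remove, including the linear search for the
// predecessor pred that makes this operation linear-time in the worst case.
public class SLLRemoveSketch<T> {
    private class Element {
        T data;
        Element next;
        Element(T obj) { data = obj; }
    }

    private Element head, tail, current;

    public void add(T obj) {
        Element x = new Element(obj);
        if (current == null) { x.next = head; head = x; }
        else { x.next = current.next; current.next = x; }
        if (x.next == null) tail = x;
        current = x;
    }

    public void remove() {
        if (current == null) {
            System.err.println("No current element to remove");
            return;
        }
        Element pred = null;
        if (current == head)
            head = current.next;          // removing the first element
        else {
            // Linear search for the predecessor of the current element.
            for (pred = head; pred != null && pred.next != current; pred = pred.next)
                ;
            if (pred == null) {           // defensive coding: should never happen
                System.err.println("current not found in list");
                return;
            }
            pred.next = current.next;     // splice out the current element
        }
        if (current == tail)              // removed the last element
            tail = pred;
        current = current.next;
    }

    public String toString() {
        String result = "";
        for (Element e = head; e != null; e = e.next)
            result += e.data + " ";
        return result.trim();
    }

    public static void main(String[] args) {
        SLLRemoveSketch<String> list = new SLLRemoveSketch<>();
        list.add("Maine");
        list.add("Idaho");
        list.add("Utah"); // current is Utah
        list.remove();    // linear search finds Idaho as the predecessor
        System.out.println(list); // prints Maine Idaho
    }
}
```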
The bottom line is that, compared with the remove code for a circular, doubly linked list with a sentinel, the remove code for a singly linked list is more complex, has more possibilities for error, and can take longer.
toString for a list

The toString for a singly linked list is similar to how we print a circular, doubly linked list with a sentinel, except that now we start from head rather than sentinel.next and that the termination condition is not whether we come back to the sentinel but rather whether the reference we have is null. The for-loop header, therefore, is
for (x = head; x != null; x = x.next)
The contains method for a singly linked list is perhaps a little shorter than for a circular, doubly linked list with a sentinel, because now we do not replace the object reference in the sentinel. The for-loop header, therefore, becomes a little more complicated. We have to check whether we have run off the end of the list (which we did not have to do when we stored a reference to the object being searched for in the sentinel) and then, once we know we have not run off the end, whether the element we are looking at equals the object we want. The bodyless for-loop is
for (x = head; x != null && !x.data.equals(obj); x = x.next)
;
Although the code surrounding the for-loop simplifies with a singly linked list, the loop itself is cleaner for the circular, doubly linked list with a sentinel. Either way, it takes linear time in the worst case.
- isEmpty is easy, but slightly different from the version for a circular, doubly linked list with a sentinel. We simply return a boolean that indicates whether head is null.
- hasCurrent returns true if and only if there is a current element. We simply return a boolean indicating whether current is not null.
- hasNext checks to see whether there is a current element and whether the next field of the current element is null, rather than seeing whether it is the sentinel.
- getFirst is different, as it sets current to head.
- getLast changes, too, setting current to tail.
- addFirst and addLast are similar to a circular, doubly linked list with a sentinel. However, addLast has to deal with an empty list separately.
- get is unchanged.
- next is identical to the version in the doubly linked list. (This is an advantage of calling hasNext rather than doing the test directly in this method.)
- We do not implement the previous and hasPrevious methods. We are not required to, because they're not in the CS10LinkedList interface.

It is also possible to have a dummy list head, even if the list is not circular. If we do so, we can eliminate some special cases, because adding at the head becomes more similar to adding anywhere else. (Instead of changing head, you update a next field.) It is also possible to have current reference the element before the element that it actually indicates, so that removal can be done in constant time. It takes a while to get used to having current reference the element before the one that is actually "current."
It is also possible to have a circular singly linked list, either with or without a sentinel.
You have probably seen big-Oh notation before, but it's certainly worthwhile to recap it. In addition, we'll see a couple of other related asymptotic notations. Chapter 4 of the textbook covers this material as well.
Remember back to linear search and binary search. Both are algorithms to search for a value in an array with n elements. Linear search marches through the array, from index 0 through the highest index, until either the value is found in the array or we run off the end of the array. Binary search, which requires the array to be sorted, repeatedly discards half of the remaining array from consideration, considering subarrays of size n, n/2, n/4, n/8, …, 1, 0 until either the value is found in the array or the size of the remaining subarray under consideration is 0.
The worst case for linear search arises when the value being searched for is not present in the array. The algorithm examines all n positions in the array. If each test takes a constant amount of time—that is, the time per test is a constant, independent of n—then linear search takes time c1n + c2, for some constants c1 and c2. The additive term c2 reflects the work done before and after the main loop of linear search. Binary search, on the other hand, takes c3 log2 n + c4 time in the worst case, for some constants c3 and c4. (Recall that when we repeatedly halve the size of the remaining array, after at most log2 n + 1 halvings, we've gotten the size down to 1.) Base-2 logarithms arise so frequently in computer science that we have a notation for them: lg n = log2 n.
Where linear search has a linear term, binary search has a logarithmic term. Recall that lg n grows much more slowly than n; for example, when n = 1,000,000,000 (a billion), lg n is approximately 30.
If we consider only the leading terms and ignore the coefficients for running times, we can say that in the worst case, linear search's running time "grows like" n and binary search's running time "grows like" lg n. This notion of "grows like" is the essence of the running time. Computer scientists use it so frequently that we have a special notation for it: "big-Oh" notation, which we write as "O-notation."
For example, the running time of linear search is always at most some linear function of the input size n. Ignoring the coefficients and low-order terms, we write that the running time of linear search is O(n). You can read the O-notation as "order." In other words, O(n) is read as "order n." You'll also hear it spoken as "big-Oh of n" or even just "Oh of n."
Similarly, the running time of binary search is always at most some logarithmic function of the input size n. Again ignoring the coefficients and low-order terms, we write that the running time of binary search is O(lg n), which we would say as "order log n," "big-Oh of log n," or "Oh of log n."
In fact, within our O-notation, if the base of a logarithm is a constant (such as 2), then it doesn't really matter. That's because of the formula

$\displaystyle\log_a n = \frac{\log_b n}{\log_b a}$

for all positive real numbers n, a, and b (with a and b not equal to 1). In other words, if we compare loga n and logb n, they differ by a factor of logb a, and this factor is a constant if a and b are constants. Therefore, even when we use the "lg" notation within O-notation, it's irrelevant that we're really using base-2 logarithms.
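A quick numeric check of both claims, using Java's Math.log (the natural logarithm) to compute logarithms in other bases:

```java
// Verify that lg of a billion is about 30, and that logarithms in two constant
// bases differ only by a constant factor, independent of n.
public class LogDemo {
    static double log(double base, double n) {
        return Math.log(n) / Math.log(base); // change-of-base formula
    }

    public static void main(String[] args) {
        // lg of a billion is about 30, as claimed earlier
        System.out.println(log(2, 1e9)); // about 29.9

        // log_2 n / log_10 n is the constant log_2 10, whatever n is
        for (double n : new double[] {100, 1e6, 1e9}) {
            System.out.println(log(2, n) / log(10, n)); // about 3.32 every time
        }
    }
}
```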
O-notation is used for what we call "asymptotic upper bounds." By "asymptotic" we mean "as the argument (n) gets large." By "upper bound" we mean that O-notation gives us a bound from above on how high the rate of growth is.
Here's the technical definition of O-notation, which will underscore both the "asymptotic" and "upper-bound" notions:
A running time is O(n) if there exist positive constants n0 and c such that for all problem sizes n ≥ n0, the running time for a problem of size n is at most cn.
Here's a helpful picture:
The "asymptotic" part comes from our requirement that we care only about what happens at or to the right of n0, i.e., when n is large. The "upper bound" part comes from the running time being at most cn. The running time can be less than cn, and it can even be a lot less. What we require is that there exists some constant c such that for sufficiently large n, the running time is bounded from above by cn.
For an arbitrary function f(n), which is not necessarily linear, we extend our technical definition:
A running time is O(f(n)) if there exist positive constants n0 and c such that for all problem sizes n ≥ n0, the running time for a problem of size n is at most c f(n).
A picture:
Now we require that there exist some constant c such that for sufficiently large n, the running time is bounded from above by c f(n).
Actually, O-notation applies to functions, not just to running times. But since our running times will be expressed as functions of the input size n, we can express running times using O-notation.
In general, we want as slow a rate of growth as possible, since if the running time grows slowly, that means that the algorithm is relatively fast for larger problem sizes.
We usually focus on the worst case running time, for several reasons:
You might think that it would make sense to focus on the "average case" rather than the worst case, which is exceptional. And sometimes we do focus on the average case. But often it makes little sense. First, you have to determine just what is the average case for the problem at hand. Suppose we're searching. In some situations, you find what you're looking for early. For example, a video database will put the titles most often viewed where a search will find them quickly. In some situations, you find what you're looking for on average halfway through all the data…for example, a linear search with all search values equally likely. In some situations, you usually don't find what you're looking for…like at Radio Shack.
It is also often true that the average case is about as bad as the worst case. Because the worst case is usually easier to identify than the average case, we focus on the worst case.
Computer scientists use notations analogous to O-notation for "asymptotic lower bounds" (i.e., the running time grows at least this fast) and "asymptotically tight bounds" (i.e., the running time is within a constant factor of some function). We use Ω-notation (that's the Greek letter "omega") to say that the function grows "at least this fast". It is almost the same as big-Oh notation, except that it has an "at least" instead of an "at most":
A running time is Ω(f(n)) if there exist positive constants n0 and c such that for all problem sizes n ≥ n0, the running time for a problem of size n is at least c f(n).
We use Θ-notation (that's the Greek letter "theta") for asymptotically tight bounds:
A running time is Θ(f(n)) if there exist positive constants n0, c1, and c2 such that for all problem sizes n ≥ n0, the running time for a problem of size n is at least c1 f(n) and at most c2 f(n).
Pictorially,
In other words, with Θ-notation, for sufficiently large problem sizes, we have nailed the running time to within a constant factor. As with O-notation, we can ignore low-order terms and constant coefficients in Θ-notation.
Note that Θ-notation subsumes O-notation in that
If a running time is Θ(f(n)), then it is also O(f(n)).
The converse (O(f(n)) implies Θ(f(n))) does not necessarily hold.
The general term that we use for O-notation, Θ-notation, and Ω-notation is asymptotic notation.
Asymptotic notations provide ways to characterize the rate of growth of a function f(n). For our purposes, the function f(n) describes the running time of an algorithm, but it really could be any old function. Asymptotic notation describes what happens as n gets large; we don't care about small values of n. We use O-notation to bound the rate of growth from above to within a constant factor, and we use Θ-notation to bound the rate of growth to within constant factors from both above and below. (We won't use Ω-notation much in this course.)
We need to understand when we can apply each asymptotic notation. For example, in the worst case, linear search runs in time proportional to the input size n; we can say that linear search's worst-case running time is Θ(n). It would also be correct, but slightly less precise, to say that linear search's worst-case running time is O(n). Because in the best case, linear search finds what it's looking for in the first array position it checks, we cannot say that linear search's running time is Θ(n) in all cases. But we can say that linear search's running time is O(n) in all cases, since it never takes longer than some constant times the input size n.
Although the definitions of O-notation and Θ-notation may seem a bit daunting, these notations actually make our lives easier in practice. There are two ways in which they simplify our lives.
I won't go through the math that follows in class. You may read it, in the context of the formal definitions of O-notation and Θ-notation, if you wish. For now, the main thing is to get comfortable with the ways that asymptotic notation makes working with a function's rate of growth easier.
Constant multiplicative factors are "absorbed" by the multiplicative constants in O-notation (c) and Θ-notation (c1 and c2). For example, the function 1000 n2 is Θ(n2) since we can choose both c1 and c2 to be 1000.
Although we may care about constant multiplicative factors in practice, we focus on the rate of growth when we analyze algorithms, and the constant factors don't matter. Asymptotic notation is a great way to suppress constant factors.
When we add or subtract low-order terms, they disappear when using asymptotic notation. For example, consider the function n2 + 1000 n. I claim that this function is Θ(n2). Clearly, if I choose c1 = 1, then I have n2 + 1000 n ≥ c1 n2, and so this side of the inequality is taken care of.
The other side is a bit tougher. I need to find a constant c2 such that for sufficiently large n, I'll get that n2 + 1000 n ≤ c2 n2. Subtracting n2 from both sides gives 1000 n ≤ c2 n2 − n2 = (c2 − 1) n2. Dividing both sides by (c2 − 1) n gives $\displaystyle \frac{1000}{c_2 - 1} \leq n$. Now I have some flexibility, which I'll use as follows. I pick c2 = 2, so that the inequality becomes $\displaystyle \frac{1000}{2-1} \leq n$, or 1000 ≤ n. Now I'm in good shape, because I have shown that if I choose n0 = 1000 and c2 = 2, then for all n ≥ n0, I have 1000 ≤ n, which we saw is equivalent to n2 + 1000 n ≤ c2 n2.
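The constants just derived can be checked numerically. This sketch verifies that with c1 = 1, c2 = 2, and n0 = 1000, we have c1 n² ≤ n² + 1000n ≤ c2 n² for a range of n ≥ n0:

```java
// Numeric check of the bounds derived above for n^2 + 1000n.
public class BoundCheck {
    public static void main(String[] args) {
        for (long n = 1000; n <= 100_000_000L; n *= 10) {
            long f = n * n + 1000 * n;
            if (f < n * n || f > 2 * n * n) // c1 = 1 below, c2 = 2 above
                throw new AssertionError("bound fails at n = " + n);
        }
        System.out.println("bounds hold for n from 1000 to 100,000,000");
    }
}
```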
The point of this example is to show that adding or subtracting low-order terms just changes the n0 that we use. In our practical use of asymptotic notation, we can just drop low-order terms.
In combination, constant factors and low-order terms don't matter. If we see a function like 1000 n2 − 200 n, we can ignore the low-order term 200 n and the constant factor 1000, and therefore we can say that 1000 n2 − 200 n is Θ(n2).
As we have seen, we use O-notation for asymptotic upper bounds and Θ-notation for asymptotically tight bounds. Θ-notation is more precise than O-notation. Therefore, we prefer to use Θ-notation whenever it's appropriate to do so.
We shall see times, however, in which we cannot say that a running time is tight to within a constant factor both above and below. Sometimes, we can bound a running time only from above. In other words, we might only be able to say that the running time is no worse than a certain function of n, but it might be better. In such cases, we'll have to use O-notation, which is perfect for such situations.