Hello
everyone,
This week the lectures felt like review in the sense that they went over things I am already using but explicitly explained why and how. We talked about how assigning a value to a name makes the name refer to an object at a particular memory location. Names have different scopes: local, nonlocal, global, and built-in. There are variations and differences in how methods are accessed and used. We traced recursive functions, and we discussed the efficiency of functions using a Fibonacci function as an example. Quick sort also came up to help us understand the efficiency of a program and how to improve it. Needless to say, I am well aware of these concepts and ideas from my Java experience, but it is always nice to have a refresher once in a while.
Next up, I am going to talk about three of the well-known sorts in programming: selection sort, merge sort, and quick sort.
The following is based on two assumptions: n is the length of the list to be sorted, and the lists are to be sorted in non-decreasing order.
Selection sort is O(n²). Each pass traverses the unsorted part of the list to find the largest element and then swaps it into its proper position at the end of that part. The sort makes n − 1 such passes in total, and each pass stops before the positions where the list is already sorted. It is the least efficient of the three sorts mentioned here.
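Here is a minimal Python sketch of that idea (the function name and details are my own): each pass scans the unsorted prefix for its largest element and swaps it to the end of that prefix.

```python
def selection_sort(lst):
    """Sort lst in place in non-decreasing order."""
    # After each pass, lst[end + 1:] holds the largest elements, in order.
    for end in range(len(lst) - 1, 0, -1):
        # Find the index of the largest element in the unsorted part lst[0:end + 1].
        largest = 0
        for i in range(1, end + 1):
            if lst[i] > lst[largest]:
                largest = i
        # Swap it into its final position.
        lst[largest], lst[end] = lst[end], lst[largest]
    return lst
```

The outer loop runs n − 1 times and the inner scan is linear, which is where the O(n²) comes from.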
Merge sort is O(n log₂ n). It uses the divide-and-conquer strategy: it recursively divides the list into two parts of the same length (or with one part having one more element than the other). When the process reaches the base case, a list of length one, it returns to the previous call in the recursion, where the two separated lists are merged in sorted order (essentially deciding, element by element, which goes in front of the other) into one bigger list. This merging continues up the recursion until the final merge, where the entire original list is merged and consequently sorted.
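A short Python sketch of that recursion (again, names are my own): split in half, sort each half recursively, then merge the two sorted halves.

```python
def merge_sort(lst):
    """Return a new list with the elements of lst in non-decreasing order."""
    if len(lst) <= 1:                 # base case: 0 or 1 elements, already sorted
        return lst[:]
    mid = len(lst) // 2               # halves differ in length by at most one
    left = merge_sort(lst[:mid])
    right = merge_sort(lst[mid:])
    # Merge: repeatedly take the smaller front element of the two halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])           # one of these is empty; the other is
    merged.extend(right[j:])          # already sorted, so append the rest
    return merged
```

There are log₂ n levels of splitting, and the merges at each level touch all n elements, giving O(n log₂ n). Note the slicing and the `merged` list are the extra storage that quick sort avoids.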
Quick sort is also O(n log₂ n) on average. Like merge sort, quick sort uses the divide-and-conquer strategy, but instead of dividing the list into two parts of (almost) the same length, it partitions the list into two parts around a pivot value. The pivot can be chosen in various ways; for example, it can simply be the first element of the list. Every element in the front partition is less than or equal to the pivot, and the second partition consists of the elements larger than the pivot, with the pivot itself in neither partition while partitioning. This is done with two indices, i starting at the front of the list and j at the back, comparing the elements at those positions with the pivot and swapping when necessary. Specifically: if the element at i is greater than the pivot and the element at j is less than or equal to it, swap them; if the element at i is less than or equal to the pivot, set i = i + 1; if the element at j is greater than the pivot, set j = j − 1. These steps repeat until i and j meet or cross. Then quick sort is called on each of the two partitions, and since they are never copied into separate lists, they do not need to be merged at the end; the elements stay together in the same list, and only i and j change.
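Here is one way the in-place partitioning described above can look in Python; this is a sketch under my own naming and index conventions (first element as pivot, i and j as the two scanning indices), not the only way to write it.

```python
def quick_sort(lst, low=0, high=None):
    """Sort lst[low:high + 1] in place, using the first element as pivot."""
    if high is None:
        high = len(lst) - 1
    if low >= high:                   # 0 or 1 elements: nothing to do
        return lst
    pivot = lst[low]
    i, j = low + 1, high              # scan inward from both ends
    while i <= j:
        if lst[i] > pivot and lst[j] <= pivot:
            lst[i], lst[j] = lst[j], lst[i]   # both on the wrong side: swap
        if lst[i] <= pivot:
            i += 1                            # element belongs in front partition
        if i <= j and lst[j] > pivot:
            j -= 1                            # element belongs in back partition
    # j now marks the last element <= pivot; move the pivot between the partitions.
    lst[low], lst[j] = lst[j], lst[low]
    quick_sort(lst, low, j - 1)       # front partition: elements <= pivot
    quick_sort(lst, j + 1, high)      # back partition: elements > pivot
    return lst
```

Notice that no new lists are created: the two recursive calls just work on different index ranges of the same list, which is why no merge step is needed.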
In theory, and on practical data where the pivot does not turn out to be an extreme value (the lowest or the highest), quick sort can pretty much outperform merge sort since it does not use extra storage; however, if that extreme case keeps occurring, quick sort degrades to its worst case of O(n²), which means merge sort's running time is more predictable than quick sort's.
I sincerely thank whoever reads my blog. Although I do not know who you are (or if there even is anyone), I hope you will have a nice life with no regrets and blah blah blah blah…. And yeah… So, good luck and have fun.
Thank you for reading,
pilpho