Overall, after reading the entire book, I find it (1) rich in information, (2) smooth in its reading curve, and (3) excellent in translation quality. It is well worth a second or even a third reading.
In terms of information density, the claim of "rich information" is measurable. The book looks slight on first inspection: it is light in the hand, runs only 180 pages, and is less than a centimeter thick. Reading it in full, however, produces a strong sense of contrast, which comes from its dense concentration of knowledge. The book comprises six chapters, an afterword, a glossary, references, and a further-reading section. The chapters cover an overview of algorithms, graph algorithms, search algorithms, sorting algorithms, the PageRank algorithm, and deep learning algorithms, spanning the major application areas of computer algorithms. Graphs, sorting, searching, and deep learning are broad algorithm families, and each chapter introduces representative concrete algorithms; PageRank, by contrast, is the search-ranking algorithm that Google disclosed long ago, and it combines graph theory with matrix computation.
On the surface, this table of contents resembles that of a typical data structures and algorithms book. Only when readers immerse themselves in the text do they discover what sets it apart. (1) The coverage is remarkably broad, connecting computer algorithms to unexpected practical scenarios. For example: the overview in Chapter 1 uses the modulo operation to analyze musical scales and composition beats; the graph algorithms in Chapter 2 include the DNA fragment assembly problem and the tournament scheduling problem; the search chapter (Chapter 3) draws on the Matthew effect as inspiration for search optimization, as well as the optimal stopping problem and the secretary problem, which trace back to the astronomer Kepler's search for a marriage partner; the sorting chapter (Chapter 4) introduces selection sort, insertion sort, radix sort, quicksort, and merge sort through examples of arranging playing cards and lining people up by height; Chapter 5 explains the core mechanism behind Google's webpage ranking, in which the billions of crawled web pages are treated as nodes and the links between them as edges, so that the whole World Wide Web becomes a weighted directed graph. The weight measurement introduces the concept of backlinks (inbound links), and the general idea is that nodes (web pages) with higher total weight rank higher; matrix operations are then used to compute the weights of all nodes, which calls for a review of linear algebra. (2) The book is rich in footnotes, references, and further-reading pointers.
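The ranking mechanism described above can be sketched in a few lines. The following is a minimal, illustrative power-iteration version of PageRank on a hypothetical four-page web (the graph, page names, and damping factor of 0.85 are assumptions for the demonstration, not taken from the book):

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with uniform weight
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            # A page shares its weight equally among its outgoing links;
            # a dangling page (no out-links) spreads it over all pages.
            targets = outs if outs else pages
            share = rank[p] / len(targets)
            for q in targets:
                new[q] += damping * share
        rank = new
    return rank

# Hypothetical tiny web: page C has the most backlinks.
web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(web)
print(sorted(ranks, key=ranks.get, reverse=True))  # C ranks first
```

The key observation is the one the book makes: rank flows along edges, so a page pointed to by many (or by highly ranked) pages accumulates weight. At web scale this iteration is expressed as repeated multiplication by a transition matrix, which is why linear algebra enters the picture.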
Unlike most books on data structures and algorithms, which focus on code implementation and the drilling of problem-solving techniques, this book stays at a general level of depth while being extremely broad in scope. Its aim is to expand readers' understanding of algorithms in the broad sense, to widen programmers' algorithmic horizons, and to deepen everyone's grasp of the intrinsic nature of the tools they rely on. Finally, the fascinating afterword gives a complete account of the operating principle of the Turing machine and reproduces its working steps with a worked mathematical example. The Turing machine is a foundational model of theoretical computer science, a generalization of today's von Neumann-architecture universal computers, and the tangible embodiment of algorithms passing from thought into mechanical, and later electronic, form. High-level programming languages are, in essence, formal notations whose power is that of the Turing machine. The Turing machine connects machinery to logical and numerical operations (computational mathematics) and can, in principle, recast reasoning as computation, though whether it can completely encompass all of logic and mathematics remains uncertain.
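The afterword's step-by-step reproduction of a Turing machine can be mirrored in code. Below is a minimal, hypothetical simulator (not the book's own example): the machine is just a table of (state, symbol) → (symbol to write, head move, next state) rules, here programmed to increment a binary number on the tape.

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Execute a rule table over a tape, returning the final tape contents."""
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Binary increment: scan right to the end of the number, then carry leftward.
rules = {
    ("start", "0"): ("0", "R", "start"),   # skip over digits
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),   # hit the blank: begin carrying
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry = 0, carry continues
    ("carry", "0"): ("1", "L", "halt"),    # 0 + carry = 1, done
    ("carry", "_"): ("1", "L", "halt"),    # overflow: prepend a new digit
}

print(run_turing_machine(rules, "1011"))  # prints "1100" (i.e. 11 + 1 = 12)
```

Despite its simplicity, this read-write-move loop is the whole model: everything a modern computer does can, in principle, be reduced to such steps, which is exactly the point the afterword drives home.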