Blockchain Fundamentals

Blockchain is a distributed data storage solution secured by cryptography. It consists of blocks linked together in a chain, where each block contains data, a timestamp, and a hash of the previous block.

Structure and Security

  • Block Structure:

    • Contains data, a timestamp, and the hash of the previous block.
    • Hash is computed based on all data in the block, ensuring integrity.
  • Cryptographic Security:

    • Any alteration to data in a block changes its hash.
    • Chaining blocks via hashes makes tampering detectable across the chain.

Mining and Consensus

  • Mining Process:

    • Miners compete to find a hash with specific properties (e.g., leading zeros).
    • Involves adjusting a nonce in the block’s data until the desired hash is found (see the toy sketch after this list).
  • Hash Properties:

    • Hash functions ensure that small changes in input data produce significantly different hashes.
    • A valid block (“mined”) is one that meets the criteria (e.g., starts with zeros).
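To make the nonce search concrete, here is a toy proof-of-work loop in Java — a simplified sketch for illustration only (real Bitcoin mining double-SHA-256 hashes an 80-byte block header and compares the result against a numeric target rather than a zero-prefix string):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

// Toy proof-of-work: keep incrementing a nonce until the SHA-256 hash of the
// block contents starts with the required number of leading zeros.
public class ToyMiner {
    static String sha256Hex(String input) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        return HexFormat.of().formatHex(digest.digest(input.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        String previousHash = "0000f00d";   // placeholder for the previous block's hash
        String data = "alice pays bob 5";   // placeholder block data
        String target = "0".repeat(4);      // difficulty: 4 leading zero hex digits

        long nonce = 0;
        String hash = sha256Hex(previousHash + data + nonce);
        while (!hash.startsWith(target)) {
            nonce++;
            hash = sha256Hex(previousHash + data + nonce);
        }
        System.out.println("mined: nonce=" + nonce + " hash=" + hash);
    }
}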

Bitcoin Example

  • Bitcoin Mining:
    • Difficulty adjusts periodically to maintain an average block mining time (~10 minutes).
    • Miners aim to meet the current difficulty by adjusting their nonce.
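As a sketch of how the adjustment works: Bitcoin retargets every 2016 blocks by comparing how long those blocks actually took against the expected two weeks, rescaling the target as new_target = old_target × (actual_timespan / expected_timespan), with the change clamped to at most a factor of four per adjustment.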

Conclusion

Blockchain’s security and reliability stem from its cryptographic hashing and decentralized consensus mechanisms. It ensures data integrity and trust in distributed systems.

Bitcoin: A Peer-to-Peer Electronic Cash System

Peer-to-Peer Network

  • Bitcoin operates on a decentralized peer-to-peer network without a central authority or server.
  • Each participant (node) maintains a complete record of account balances and transaction history.

Blockchain Technology

  • Blockchain consists of a series of data blocks cryptographically linked together.
  • Each block contains transaction data and the hash of the previous block, forming an immutable chain.
  • New blocks are added through Proof-of-Work, ensuring difficulty in creation but ease of verification.

Proof-of-Work

  • Mining relies on computationally intensive Proof-of-Work algorithms (SHA-256 hash function).
  • Miners compete to find a nonce that produces a hash value meeting specified criteria to add a new block.

Transaction Verification and Confirmation

  • Transactions are broadcasted to the network, verified, and added to the mempool.
  • Miners select transactions from the mempool, validate them, and include them in new blocks added to the blockchain.

Currency Issuance and Distribution

  • New bitcoins are issued through mining rewards, which decrease over time through scheduled halvings of the block reward (roughly every four years), independent of changes in mining difficulty.
  • Total supply is capped at 21 million bitcoins, establishing scarcity and value.
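The 21 million cap follows from the issuance schedule: the block reward started at 50 BTC and halves every 210,000 blocks, so total issuance is the geometric series 210,000 × 50 × (1 + 1/2 + 1/4 + …) = 210,000 × 100 = 21,000,000 BTC (slightly less in practice due to rounding).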

Anonymity and Security

  • Bitcoin transactions use public-key cryptography for security and privacy.
  • Users transact under pseudonymous addresses rather than real-world identities, which protects privacy but does not guarantee full anonymity.

Decentralization Advantages and Challenges

  • Decentralization enhances resistance to censorship and security against single points of failure.
  • Challenges include scalability, energy consumption, and transaction speed.

Reference (Bitcoin): EN edition (PDF)

Reference (Bitcoin): CN edition (PDF)

Simple Aged Cache (Project)

Video1: Testing Fundamentals

Why Testing is Important

  • Test Early and Often: Testing should occur early and frequently in software development to ensure high quality and testability.
  • User-Centered Testing: Testing ensures the product meets user expectations and requirements, facilitating clear communication among stakeholders.

Types of Testing

  1. Unit Tests:

    • Low-level tests that validate individual methods or functions.
    • Focus on different paths, conditions, and logic within a single code unit.
  2. Integration Tests:

    • Checks interactions between different modules or components.
    • Examples include API calls, database connections, and inter-module communication.
  3. Acceptance Tests:

    • High-level tests that validate software from the user’s perspective.
    • Typically derived from user stories or requirements, such as user registration functionality.

Testing Approaches

  • Outside-In Testing:

    • Starts with acceptance tests and gradually dives into unit tests.
    • Suitable for applications like web apps with clear configurations and CRUD operations.
  • Inside-Out Testing:

    • Starts with unit tests and gradually expands to integration and acceptance tests.
    • Suitable for backend systems with complex business logic.

Code Coverage

  • Statement Coverage: Ensures every statement in the code executes at least once under the test suite.
  • Branch Coverage: Ensures every branch of each conditional (both the true and false outcomes) is taken.
  • Condition Coverage: Ensures each boolean sub-expression, including guard conditions, evaluates to both true and false.
  • Path Coverage: Ensures every distinct execution path through the code is exercised, including edge cases.

Refactoring

  • Importance of Refactoring:

    • Ensures code evolves with changing requirements and improves over time.
    • Follows the “Red-Green-Refactor” cycle: write a failing test first, write just enough code to make it pass, then refactor to improve the design (a minimal sketch follows this list).
  • Related Books:

    • Refactoring
    • Design Patterns
    • Refactoring to Patterns
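As a minimal sketch of the “red” step, assuming JUnit 5 and the SimpleAgedCache class described later in these notes:

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class SimpleAgedCacheTest {
    @Test
    void isEmptyBeforeAnythingIsPut() {
        // Red: fails until isEmpty() is implemented (Green),
        // after which the implementation can be refactored with confidence.
        assertTrue(new SimpleAgedCache().isEmpty());
    }
}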

Video2: Test Doubles in Testing

In software engineering, Test Doubles are crucial concepts that help effectively conduct unit, integration, and acceptance testing.

  • Stub:

    • Replaces real components, returns predefined responses.
  • Mock:

    • Provides programmable behavior to verify interactions in tests.
  • Spy:

    • Records actual operations for subsequent verification.
  • Fake:

    • A working but simplified implementation (e.g., an in-memory database) used to make tests faster and more self-contained.
  • Dummy:

    • Placeholder typically used to satisfy method parameter requirements.

These Test Doubles aid in isolation and mocking during testing, ensuring focused and speedy tests that verify code behavior and interactions.
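As a sketch of how the first three doubles look in plain Java, assuming a hypothetical Mailer dependency (no mocking framework required):

import java.util.ArrayList;
import java.util.List;

// Hypothetical dependency we want to replace in tests.
interface Mailer {
    boolean send(String to, String body);
}

// Stub: returns a predefined response; no real mail is sent.
class StubMailer implements Mailer {
    public boolean send(String to, String body) { return true; }
}

// Spy: records the calls it receives so a test can verify them afterwards.
class SpyMailer implements Mailer {
    final List<String> recipients = new ArrayList<>();
    public boolean send(String to, String body) {
        recipients.add(to);
        return true;
    }
}

// Code under test, with the dependency injected.
class WelcomeService {
    private final Mailer mailer;
    WelcomeService(Mailer mailer) { this.mailer = mailer; }
    void welcome(String user) { mailer.send(user, "Welcome!"); }
}

A test can then construct WelcomeService with a SpyMailer, call welcome("a@example.com"), and assert that recipients contains that address.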


Video3: Simple Aged Cache Project Launch Video

To kickstart the “Simple Aged Cache” assignment project, here’s an overview of the key steps and the technology stack:

  1. Project Download and Setup:

    • Download project code from the provided GitHub URL.
    • Review the README for basic project information and setup guidelines.
  2. Project Structure and Setup:

    • Extract downloaded project files and place them in a suitable working directory, e.g., workspace folder in the user directory.
  3. Environment Setup:

    • Use command-line tools like Terminal to execute Gradle commands within the project directory.
    • Gradle serves as the build tool, managing dependencies and executing the test suite with commands like ./gradlew clean build.
  4. Integrated Development Environment (IDE):

    • Recommend using IntelliJ IDEA as the primary IDE.
    • Import the project into IntelliJ, ensuring correct JDK and Gradle environment settings.
  5. Continuous Integration:

    • The project may include GitHub Actions configuration files (e.g., build.yml) for continuous integration and automated builds on GitHub.
  6. Technology Stack and Tools:

    • Use JVM technology stack, with options to implement in Java or Kotlin.
    • Gradle manages project dependencies and build processes.
    • JUnit serves as the testing framework for writing and executing unit tests.

Project: Simple Aged Cache Project Overview

The Simple Aged Cache project is designed as an exercise to introduce modern software engineering practices, with a particular focus on test-driven development (TDD). It serves as an educational tool for understanding the fundamentals of building scalable software systems, including those leveraging big data concepts.

Background

Initially developed in the early 2000s at a telecommunications company, the Simple Aged Cache has evolved into a learning tool for software engineering interviews. It emphasizes TDD and has been used successfully in interview processes at various companies, including Gnip and Twitter.

Project Details

The project aims to implement a basic cache system with the following requirements:

  • Constructor Variations: Includes constructors with and without a Clock parameter.
  • Method Definitions: Key methods such as put, isEmpty, size, and get are defined to manage cache entries.
  • Exercise Objectives: Participants are tasked with making the provided tests pass, exploring concepts like ExpirableEntry and avoiding built-in collection classes.

Implementation Choices

Participants are encouraged to explore different implementations using Java and Kotlin, and to consider the use of an inner class for ExpirableEntry instead of traditional collection classes.
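As one possible shape of such an implementation, here is a minimal Java sketch using a hand-rolled singly linked list and an ExpirableEntry inner class (the method signatures are assumptions based on the requirements above, not the assignment’s exact contract):

import java.time.Clock;

// Minimal sketch: a singly linked list of entries, no built-in collections.
public class SimpleAgedCache {
    private final Clock clock;
    private ExpirableEntry head; // most recently added entry

    public SimpleAgedCache(Clock clock) { this.clock = clock; }

    public SimpleAgedCache() { this(Clock.systemDefaultZone()); }

    public void put(Object key, Object value, int retentionInMillis) {
        ExpirableEntry entry =
                new ExpirableEntry(key, value, clock.millis() + retentionInMillis);
        entry.next = head;
        head = entry;
    }

    public boolean isEmpty() { return size() == 0; }

    public int size() {
        int count = 0;
        for (ExpirableEntry e = head; e != null; e = e.next) {
            if (!e.isExpired(clock.millis())) count++;
        }
        return count;
    }

    public Object get(Object key) {
        for (ExpirableEntry e = head; e != null; e = e.next) {
            if (!e.isExpired(clock.millis()) && e.key.equals(key)) return e.value;
        }
        return null;
    }

    // Inner class holding a key, a value, and an expiration time.
    private static class ExpirableEntry {
        final Object key;
        final Object value;
        final long expiresAtMillis;
        ExpirableEntry next;

        ExpirableEntry(Object key, Object value, long expiresAtMillis) {
            this.key = key;
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }

        boolean isExpired(long nowMillis) { return nowMillis >= expiresAtMillis; }
    }
}

Injecting a Clock, rather than calling System.currentTimeMillis() directly, is what makes the expiration behavior testable: tests can pass a fixed or manually advanced clock.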

Learning Objectives

  • Understanding the importance of TDD in software development.
  • Implementing basic data structures and algorithms for caching purposes.
  • Exploring alternative approaches to common programming tasks.

Getting Started

To begin working on the Simple Aged Cache project:

  1. Download the Codebase: Visit Simple Aged Cache to obtain the project files.

  2. Setup Environment: Ensure you have Java or Kotlin installed, along with Gradle for managing dependencies and building the project.

  3. Explore and Implement: Follow the instructions provided in the README to start implementing and testing your solution.

  4. Feedback and Improvement: Use TDD principles to iteratively improve your implementation, aiming to pass the provided test cases.

Conclusion

The Simple Aged Cache project not only serves as a practical exercise in software engineering but also highlights the iterative nature of software development through TDD. By focusing on building scalable and maintainable systems, participants gain valuable insights into modern software architecture principles.

For more details, refer to the Simple Aged Cache Workshop provided by Initial Capacity.

© 2024 Initial Capacity, Inc. All rights reserved.

No Silver Bullet

Overview of “No Silver Bullet”

  • Author: Frederick P. Brooks, Jr.
  • Other Works: “The Mythical Man-Month”
  • Publication: Second edition available, easily found online.

Key Concepts

  • Essence vs. Accidents: Brooks differentiates between the intrinsic difficulties (essence) of software engineering
    and the incidental difficulties (accidents).
    • Essence: Inherent complexities of the software itself.
    • Accidents: Difficulties related to the production of software.

Essence of Software

  • Complexity:
    • Each software system is unique; no two systems are the same.
    • Unlike manufacturing identical cars, software systems have unique architectures and requirements.
  • Conformity:
    • Software must conform to various external factors, such as new leadership decisions or regulatory changes.
    • Example: Switching from RabbitMQ to Kafka based on new leadership’s preference.
  • Changeability:
    • Software is designed for change, not just replaceability.
    • Constantly evolving requirements necessitate frequent changes in software.
  • Invisibility:
    • Software lacks a visible blueprint like a house.
    • Tools like UML and whiteboards attempt to visualize software, but much remains invisible.
    • The inherent invisibility makes it challenging to fully grasp the software’s structure.

Improving Software Development

  • Iterative Development:
    • Emphasizing small, incremental improvements.
  • Great People:
    • Having talented individuals is crucial for addressing the essence of software.
    • Training the next generation of designers, including software engineers and data scientists, is essential.

Key Takeaway

  • No Significant Breakthrough:
    • Brooks argues that no single development, in either technology or management technique, by itself promises even an order-of-magnitude improvement in productivity, reliability, or simplicity within a decade.
    • The challenges of software engineering remain deeply rooted in its essence.

Potential Quiz Topics

  1. Essence vs. Accidents:

    • Define the difference between essence and accidents in software engineering.
    • Provide examples of each.
  2. Complexity:

    • Explain why software complexity is different from manufacturing complexity.
    • Describe the unique challenges of software complexity.
  3. Conformity:

    • Discuss how conformity impacts software development.
    • Give an example of conformity affecting software choices.
  4. Changeability:

    • Explain why changeability is a fundamental aspect of software.
    • Compare changeability in software to other fields (e.g., automobile recalls).
  5. Invisibility:

    • Describe the challenges of software invisibility.
    • Explain why tools like UML and whiteboards are insufficient.
  6. Iterative Development and Great People:

    • Discuss the importance of iterative development in improving software.
    • Explain why having great people is crucial for addressing software essence.
  7. No Significant Breakthrough:

    • Summarize Brooks’s conclusion about the lack of a “silver bullet” in software engineering.

Reference: EN edition (PDF)

Reference: CN edition (PDF)

Master Theorem

Introduction and Background

  • The Master Theorem is a powerful tool in the field of computer science, specifically in the analysis of algorithms. It is used to determine the time complexity of recursive algorithms, particularly those which follow a ‘divide and conquer’ approach [1][2].

  • In divide and conquer algorithms, a problem is broken down into several smaller subproblems of the same type. These smaller problems are then solved independently and the solutions are combined to solve the original problem. Recursion is a common principle in such algorithms. Typical examples of divide and conquer algorithms include the binary search, merge sort, and quick sort algorithms [3].

  • The Master Theorem provides a systematic method to calculate the time complexity of such divide and conquer algorithms. It can immediately give the time complexity in terms of Big O notation, thus providing a simple way to predict the efficiency of the algorithm [3][4].

  • The theorem was popularized by the textbook “Introduction to Algorithms” by Cormen, Leiserson, Rivest, and Stein [1]. Since then, it has remained an integral part of the study of algorithms due to its practical use in analyzing the time complexity of recursive functions [2][3][4].

Methodology and Results

Divide and conquer algorithm

The divide and conquer algorithm is a problem-solving method that decomposes a problem into several smaller, structurally similar subproblems, solves these subproblems recursively, and then merges their solutions to obtain the solution to the original problem.

How to set up recurrence relations

The overall idea for solving the problem is to establish a recurrence relation. Through the recurrence relation, the problem can be decomposed into smaller subproblems, and the solution to the original problem can be ultimately obtained.

Recurrence relations are usually represented in mathematical form, where T(n) represents the time taken to process an input of size n.

  • Base Case: When the problem size is a small constant, the processing time is usually constant, represented as Θ(1). This is the basic termination condition for recursion.

  • Recursive Case: When the problem size exceeds the base case, the recurrence takes the form T(n) = aT(n/b) + f_div(n) + f_comb(n), consisting of three parts:

    • aT(n/b): the time to recursively solve a subproblems, each of size n/b.
    • f_div(n): the time for the divide step, i.e., decomposing the problem into subproblems.
    • f_comb(n): the time for the combine step, i.e., merging the solutions of the subproblems.
  • Meanings of Parameters:

    • In aT(n/b), a is the number of subproblems after decomposition, and T(n/b) is the time to solve a single subproblem.
    • f_div(n) and f_comb(n) are each typically functions of the problem size n.
  • Recurrence Relation:

    • Taken together, the recurrence describes the decomposition, recursive solution, and merging of the problem, along with the time each step consumes.

Divide and conquer and recurrence relations

  • The divide and conquer algorithm and recurrence relations are closely intertwined. A divide-and-conquer algorithm breaks a problem into smaller subproblems and solves them recursively: the original problem is decomposed into multiple subproblems, and the final solution is obtained by combining their recursively computed solutions.

  • A recurrence relation describes the cost of a problem in terms of the cost of smaller instances of the same problem. In a divide-and-conquer algorithm, the recurrence captures both the decomposition into subproblems and the time and resources required to solve them.

  • By solving the recurrence, we can determine the algorithm’s time and space complexity. The analysis shows how the problem size shrinks at each level of recursion and what each level costs, which lets us evaluate the algorithm’s efficiency and guides the solution of the problem.

  • In short, a divide-and-conquer algorithm solves the original problem by recursively solving subproblems, and the recurrence relation describes how the cost of the problem relates to the cost of those subproblems.

Expansion Method (Standard Method)

The expansion method is a commonly used approach for solving recurrence relations. It expands the recurrence repeatedly to expose the cost of ever-smaller subproblems, and gradually builds up the solution to the original problem; a worked example follows the steps below.

  • Step 1: Expand the Recurrence
    First, expand the recurrence several times, expressing the solution of each subproblem level as an explicit recursive expression.
  • Step 2: Construct the Recursion Tree
    Arrange the expanded expressions as a recursion tree, where each level of the tree represents one recursion depth and each node represents the cost of one subproblem.
  • Step 3: Solve the Recursion Tree
    Sum the costs of the tree level by level to obtain a closed-form solution or simplified recurrence, thereby determining the algorithm’s time complexity.
  • Step 4: Verify
    Finally, check that the obtained time complexity satisfies the original recurrence, to ensure the analysis is correct.
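As a concrete instance of these steps, consider the merge sort recurrence T(n) = 2T(n/2) + cn with T(1) = c, where c is a constant standing in for the Θ(n) divide-and-combine cost and n is assumed to be a power of two. Expanding repeatedly:

T(n) = 2T(n/2) + cn
     = 4T(n/4) + 2cn
     = 8T(n/8) + 3cn
     = ...
     = 2^k T(n/2^k) + k·cn
     = n·T(1) + cn·log2(n)    (stopping at k = log2(n))
     = Θ(n log n)

Each expansion adds one level of the recursion tree; every level costs cn, and there are log2(n) levels above the n constant-cost leaves, giving Θ(n log n) in total.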

Examples of merge sort and maximum subarray problem

  • Algorithm Concept

    • Divide the original array into two halves, solve the maximum subarray problem for each half, and combine the results with the best subarray crossing the middle to obtain the maximum subarray of the original array.
    • The problem can be recursively solved until the size of the array is small enough to be solved directly.
  • Process of Divide and Conquer Algorithm

    • Divide: Split the original array into left and right halves.
    • Combine: Compute the maximum subarray crossing the middle position, and take the best of the left, right, and crossing candidates.
    • Recurse: Solve the maximum subarray problem for the left and right halves recursively, until the array is small enough to be solved directly.
  • Application of Expansion Method

    • Use the expansion method to analyze the time complexity of the divide and conquer algorithm. Firstly, expand the recursive relations, then construct the recursion tree, and finally solve the recursion tree to obtain the time complexity of the algorithm.
  • Analysis of Time Complexity

    • Time complexity of the divide step: typically Θ(1).
    • Time complexity of the combine step (finding the crossing subarray): Θ(n).
    • Recursion depth: O(log n) levels, with Θ(n) total work per level.
    • Combining these via T(n) = 2T(n/2) + Θ(n), the maximum subarray problem takes O(n log n).
  • Efficiency Analysis of the Algorithm

    • The divide and conquer algorithm for the maximum subarray problem handles large arrays efficiently because it decomposes the problem into smaller subproblems, solves them recursively, and then merges the solutions (see the sketch below).
    • The algorithm’s time complexity is O(n log n), which is suitable for handling large-scale arrays.
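Putting the pieces above together, here is a compact Java sketch of the divide-and-conquer maximum subarray sum (the example array and expected answer follow the classic CLRS illustration):

// Divide-and-conquer maximum subarray sum: T(n) = 2T(n/2) + Θ(n) = O(n log n).
public class MaxSubarray {
    public static int maxSubarray(int[] a, int lo, int hi) {
        if (lo == hi) return a[lo];                  // base case: single element
        int mid = (lo + hi) / 2;
        int left = maxSubarray(a, lo, mid);          // best sum entirely in the left half
        int right = maxSubarray(a, mid + 1, hi);     // best sum entirely in the right half
        int cross = maxCrossing(a, lo, mid, hi);     // best sum spanning the middle
        return Math.max(cross, Math.max(left, right));
    }

    // Combine step: the best sum that crosses the midpoint, computed in Θ(n).
    private static int maxCrossing(int[] a, int lo, int mid, int hi) {
        int leftBest = Integer.MIN_VALUE, sum = 0;
        for (int i = mid; i >= lo; i--) {
            sum += a[i];
            leftBest = Math.max(leftBest, sum);
        }
        int rightBest = Integer.MIN_VALUE;
        sum = 0;
        for (int i = mid + 1; i <= hi; i++) {
            sum += a[i];
            rightBest = Math.max(rightBest, sum);
        }
        return leftBest + rightBest;
    }

    public static void main(String[] args) {
        int[] a = {13, -3, -25, 20, -3, -16, -23, 18, 20, -7, 12, -5, -22, 15, -4, 7};
        System.out.println(maxSubarray(a, 0, a.length - 1)); // prints 43
    }
}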

Master Theorem (Specialized Method)

The Master Theorem is a method for solving recurrence relations, particularly those arising in divide and conquer algorithms. It provides a quick way to determine the time complexity of recursive algorithms without expanding and solving the recurrence in detail.

The Master Theorem applies to recurrences of the following form:

T(n) = aT(n/b) + f(n)

Here, ‘a’ is the number of subproblems, ‘n/b’ is the size of each subproblem, and ‘f(n)’ represents the work done outside the recursive calls, i.e., dividing and combining.

The basic idea of the Master Theorem is to compare the growth rate of the workload ‘f(n)’ with n^(log_b(a)), the total work contributed by the recursive calls; whichever dominates determines the time complexity.

The Master Theorem has three cases:

  1. If f(n) = O(n^c), where c < log_b(a), then the time complexity of the recursive algorithm is T(n) = Θ(n^(log_b(a))).
  2. If f(n) = Θ(n^c log^k n), where k ≥ 0 and c = log_b(a), then the time complexity of the recursive algorithm is T(n) = Θ(n^c log^(k+1) n).
  3. If f(n) = Ω(n^c), where c > log_b(a), and a·f(n/b) ≤ k·f(n) for some constant k < 1 and all sufficiently large n (the regularity condition), then the time complexity of the recursive algorithm is T(n) = Θ(f(n)).
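As a quick check of the three cases against familiar recurrences (all standard results, shown here for illustration):

  • Merge sort: T(n) = 2T(n/2) + Θ(n). Here a = 2, b = 2, log_b(a) = 1, and f(n) = Θ(n^1 log^0 n), so case 2 applies with k = 0, giving T(n) = Θ(n log n).
  • Binary search: T(n) = T(n/2) + Θ(1). Here log_2(1) = 0 and f(n) = Θ(n^0), so case 2 gives T(n) = Θ(log n).
  • Karatsuba multiplication: T(n) = 3T(n/2) + Θ(n). Here log_2(3) ≈ 1.585 > 1, so case 1 gives T(n) = Θ(n^(log_2 3)) ≈ Θ(n^1.585).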

It’s important to note that the Master Theorem applies only to recurrences of this specific form; recurrences of other shapes (for example, with subproblems of unequal sizes) must be solved by other means.

Discussion and Conclusion

In this lesson, the instructor:

  • Introduced how to solve recurrence relations, focusing on divide-and-conquer algorithms.
  • Demonstrated the relationship between divide-and-conquer algorithms and recurrence relations through examples.
  • Explained how to set up recurrence relations.
  • Introduced the standard method for solving recurrence relations, known as the expansion method, which solves a recurrence by expanding it multiple times.
  • Provided solutions for merge sort and the maximum subarray problem.
  • Presented the master theorem, a general method that gives solutions to recurrences of a common form, case by case.

Reference

[1] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Third Edition. The MIT Press, 2009.
[2] Michael T. Goodrich, Roberto Tamassia, and Michael H. Goldwasser. Data Structures and Algorithms in Python. Wiley, 2013.
[3] Robert Sedgewick and Kevin Wayne. Algorithms, Fourth Edition. Addison-Wesley Professional, 2011.
[4] Sanjoy Dasgupta, Christos Papadimitriou, and Umesh Vazirani. Algorithms. McGraw-Hill Science/Engineering/Math, 2008.

Karatsuba Algorithm

Introduction to Karatsuba Algorithm

  • The Karatsuba algorithm is a foundational method for efficiently multiplying large numbers in computer science [1]. Taught in Week 1 of the Dynamic Programming and Greedy Algorithms course at CU Boulder, the algorithm stands as a testament to the power of a divide-and-conquer strategy.

  • The Karatsuba algorithm, named after Anatolii Alexeevitch Karatsuba, strategically breaks down the task of multiplying two n-digit numbers into simpler subtasks, reducing the schoolbook O(n^2) cost of multiplication to O(n^(log_2 3)) ≈ O(n^1.585). This approach illustrates the multiplicative structure of numbers and the time-saving potential of recursion and divide-and-conquer techniques in algorithm development.

  • In the following sections, we will delve into the mechanics of the Karatsuba algorithm, discuss its complexity, and examine applications where this algorithm truly shines.

The Mechanics of Karatsuba Algorithm

The Karatsuba algorithm commences multiplication by splitting both numbers into two halves.

The upper half (most significant digits) is labeled as ‘a’ and ‘c’, and the lower half (least significant digits) is correspondingly ‘b’ and ‘d’.

The algorithm breaks the traditional multiplication into three main multiplications, namely, ac, bd, and (a+b)(c+d).

Calculation of (a+b)(c+d) results in obtaining the sum of ac, bd, and a pair of cross products (ad+bc).

To assemble the final result, ac is shifted left by 2m digits and the cross term (ad+bc) by m digits (where m is the number of digits in the lower half), while bd stays unaltered: x·y = ac·10^(2m) + (ad+bc)·10^m + bd.

Because only three recursive multiplications are needed instead of four, the running time satisfies T(n) = 3T(n/2) + Θ(n) = Θ(n^(log_2 3)), making the algorithm faster than classical multiplication for sufficiently large numbers.

Karatsuba Multiplication Example

Here is an example that demonstrates how to use Karatsuba’s algorithm to calculate the multiplication of larger numbers. Let’s take the example of multiplying two 4-digit numbers: let’s say we want to calculate 1234 * 5678.

Input:

Let’s consider two numbers:

  • x = 1234
  • y = 5678

Step 1: Splitting the Numbers

  1. Split each number into two halves:
    • x: a = 12, b = 34
    • y: c = 56, d = 78

Step 2: Calculate Three Products

  1. Calculate ac: 12 * 56 = 672
  2. Calculate bd: 34 * 78 = 2652
  3. Calculate (a+b)(c+d): (12+34)(56+78) = 46 * 134 = 6164

Step 3: Calculate Cross Products

  1. Subtract ac and bd from (a+b)(c+d):
    • 6164 - 672 = 5492
    • 5492 - 2652 = 2840

Step 4: Combine Results

  1. Shift ac left by 2m digits (m = 2 digits per half, so 4 digits):
    • 672 → 6720000
  2. Shift the cross term (ad+bc) left by m digits:
    • 2840 → 284000
  3. Add the results:
    • ac * 10^4 + (ad+bc) * 10^2 + bd = 6720000 + 284000 + 2652 = 7006652

Output:

The product of 1234 and 5678 is 7006652.

Code Resources
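
A compact Java sketch of Karatsuba multiplication over java.math.BigInteger (splitting at a bit boundary for convenience rather than a decimal-digit boundary; the recurrence and the three-multiplication trick are the same):

import java.math.BigInteger;

// Karatsuba multiplication: three recursive multiplications instead of four,
// giving T(n) = 3T(n/2) + Θ(n) = Θ(n^(log_2 3)).
public class Karatsuba {
    private static final int THRESHOLD = 16; // small inputs: fall back to built-in multiply

    public static BigInteger multiply(BigInteger x, BigInteger y) {
        int n = Math.max(x.bitLength(), y.bitLength());
        if (n <= THRESHOLD) return x.multiply(y);

        int m = n / 2;                              // split position, in bits
        BigInteger a = x.shiftRight(m);             // high half of x
        BigInteger b = x.subtract(a.shiftLeft(m));  // low half of x
        BigInteger c = y.shiftRight(m);             // high half of y
        BigInteger d = y.subtract(c.shiftLeft(m));  // low half of y

        BigInteger ac = multiply(a, c);
        BigInteger bd = multiply(b, d);
        // (a+b)(c+d) - ac - bd = ad + bc, the pair of cross products
        BigInteger cross = multiply(a.add(b), c.add(d)).subtract(ac).subtract(bd);

        // x*y = ac*2^(2m) + (ad+bc)*2^m + bd
        return ac.shiftLeft(2 * m).add(cross.shiftLeft(m)).add(bd);
    }

    public static void main(String[] args) {
        System.out.println(multiply(BigInteger.valueOf(1234), BigInteger.valueOf(5678))); // 7006652
    }
}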

Wrapping Up the Discussion

In this course, we delved into the operations of addition and multiplication for large numbers. Large numbers are numbers too long to fit in the computer’s word length, and they find wide application in real-world scenarios, particularly in fields like cryptography.

  • Addition Algorithm

    • The addition algorithm adds digits starting from the least significant digit, propagating carries: when a carry occurs, it is added into the next digit’s sum. Eventually, we obtain the sum of the two large numbers.
  • Multiplication Algorithm

    • For multiplication, we employ a method known as “vertical multiplication.” Each digit of one large number is multiplied with every digit of the other, and the results are added up. Carryovers are handled similarly to addition. Ultimately, we arrive at the product of two large numbers.
  • Karatsuba Algorithm

    • In addition to the traditional multiplication algorithm, we also introduced the Karatsuba algorithm. The Karatsuba algorithm utilizes the divide-and-conquer approach, breaking down the multiplication problem of large numbers into smaller subproblems. By recursively solving these subproblems, the final result is obtained. This algorithm can significantly reduce the number of multiplication operations in certain cases, thus enhancing efficiency.

These algorithms not only effectively handle operations with large numbers but are also frequently required in fields like cryptography. Mastering these algorithms not only deepens our understanding of numerical operations but also allows us to apply this knowledge to solve practical engineering and scientific problems.

Reference

[1] Anatolii Alexeevitch Karatsuba. “The Complexity of Computations.” Proceedings of the Steklov Institute of Mathematics 211 (1995): 169–183.

Hello Hexo

Welcome to Hexo! This is your very first post. Check documentation for more info. If you get any problems when using Hexo, you can find the answer in troubleshooting or you can ask me on GitHub.

Quick Start

Create a new post

$ hexo new "My New Post"
$ hexo new post "My New Post" # (recommended)

More info: Writing

Run server

$ hexo server

More info: Server

Generate static files

$ hexo generate

More info: Generating

Deploy to remote sites

$ hexo deploy

More info: Deployment