mayura-andrew/GrokkingAlgorithms

Code examples for the algorithms covered in the textbook "Grokking Algorithms" by Aditya Y. Bhargava.

This README provides code examples and explanations for the algorithms covered in the book "Grokking Algorithms" by Aditya Y. Bhargava. These algorithms are essential for understanding and solving various computer science and programming problems.

Table of Contents

  - Binary Search
  - Simple Search
  - Selection Sort
  - Recursion
  - Quick Sort
  - Hash Tables
  - Breadth-First Search
  - Dijkstra's Algorithm
  - Greedy Algorithms
  - Dynamic Programming

Binary Search

Binary search is a fundamental algorithm for efficiently finding an element in a sorted array. It divides the search space in half with each step, so it is much faster than simple search: O(log n) beats O(n), and the advantage grows as the list of items you're searching through gets longer. Algorithm speed isn't measured in seconds; algorithm times are measured in terms of the growth of the run time, and that growth is written in Big O notation.

```python
# Python code example for binary search
def binary_search(arr, target):
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2   # middle of the current search range
        guess = arr[mid]
        if guess == target:
            return mid            # found: return the index
        if guess > target:
            high = mid - 1        # guess too high: search the lower half
        else:
            low = mid + 1         # guess too low: search the upper half
    return None                   # target is not in the array

# Time Complexity: O(log n) - Logarithmic
```
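
A quick usage check (illustrative values, not from the book):

```python
my_list = [1, 3, 5, 7, 9]
print(binary_search(my_list, 3))   # -> 1 (3 is at index 1)
print(binary_search(my_list, -1))  # -> None (not in the list)
```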

Simple Search

```python
def linear_search(arr, target):
    for i in range(len(arr)):
        if arr[i] == target:
            return i   # Target element found at index i
    return -1          # Target element not found

# Example usage
arr = [2, 4, 6, 8, 10, 12, 14]
target = 10
result = linear_search(arr, target)
if result != -1:
    print(f"Element {target} found at index {result}")
else:
    print(f"Element {target} not found in the array")

# Time Complexity: O(n) - Linear
```

Selection Sort

Selection sort is a neat algorithm, but it's not very fast. Quicksort is a faster sorting algorithm that only takes O(n log n) time.

Arrays allow fast reads; linked lists allow fast inserts and deletes:

| Operation | Arrays | Linked Lists |
|-----------|--------|--------------|
| Read      | O(1)   | O(n)         |
| Insert    | O(n)   | O(1)         |
| Delete    | O(n)   | O(1)         |

All elements in the array should be the same type (all ints, all doubles, and so on).

```python
def findSmallest(arr):
    smallest = arr[0]       # stores the smallest value
    smallest_index = 0      # stores the index of the smallest value
    for i in range(1, len(arr)):
        if arr[i] < smallest:
            smallest = arr[i]
            smallest_index = i
    return smallest_index

def selectionSort(arr):
    newArr = []
    for i in range(len(arr)):
        smallest = findSmallest(arr)    # find the smallest remaining element
        newArr.append(arr.pop(smallest))
    return newArr

# Time Complexity: O(n^2)
```
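
An illustrative usage check:

```python
print(selectionSort([5, 3, 6, 2, 10]))  # -> [2, 3, 5, 6, 10]
```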

Recursion

Recursion is when a function calls itself. Every recursive function has two cases: the base case and the recursive case. A stack has two operations: push and pop. The stack plays a big part in recursion: all function calls go onto the call stack. The call stack can get very large, which takes up a lot of memory.

```python
def factorial(n):
    if n == 1:
        return 1                      # Base case
    else:
        return n * factorial(n - 1)   # Recursive case

# Time Complexity: O(n) - Linear
```
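
As an illustrative aside (not code from the book), the two stack operations map directly onto a Python list, which is a simple way to picture the call stack's push/pop behavior during the recursion above:

```python
stack = []

# PUSH: each recursive call goes onto the top of the stack
stack.append("factorial(3)")
stack.append("factorial(2)")
stack.append("factorial(1)")

# POP: the most recent call returns first (LIFO order)
print(stack.pop())  # -> factorial(1)
print(stack.pop())  # -> factorial(2)
```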

Quick Sort

Divide and Conquer (D&C) works by breaking a problem down into smaller and smaller pieces. If you're using D&C on a list, the base case is probably an empty array or an array with one element. If you're implementing quicksort, choose a random element as the pivot; the average runtime of quicksort is O(n log n)! The constant in Big O notation can matter sometimes, which is why quicksort is faster than merge sort in practice. The constant almost never matters for simple search versus binary search, because O(log n) is so much faster than O(n) once your list gets big.

```python
def quickSort(array):
    if len(array) < 2:
        return array   # base case: arrays with 0 or 1 element are already sorted
    else:
        pivot = array[0]
        # sub-array of all the elements less than or equal to the pivot
        less = [i for i in array[1:] if i <= pivot]
        # sub-array of all the elements greater than the pivot
        greater = [i for i in array[1:] if i > pivot]
        return quickSort(less) + [pivot] + quickSort(greater)

# Time complexity:
#   average case - O(n log n)
#   worst case   - O(n^2)
# Quick sort is unique because its speed depends on the pivot you choose.
```

Hash Tables

```python
# A Python dictionary is a hash table: it stores key/value pairs.
phone_book = {}   # or: phone_book = dict()
phone_book["jenny"] = 898343
phone_book["emergency"] = 911
print(phone_book["jenny"])
print(phone_book["emergency"])

# Using a hash table to filter out duplicates (e.g. repeat voters)
voted = {}

def check_voter(name):
    if voted.get(name):
        print("kick them out")
    else:
        voted[name] = True
        print("Let them vote")

check_voter("mike")
check_voter("tom")
check_voter("tom")

# Simple cache: hash tables are good for caching
cache = {}

def get_page(url):
    if cache.get(url):
        return cache[url]   # already cached: skip the server round trip
    else:
        data = get_data_from_server(url)   # assumes a server-fetch function exists
        cache[url] = data
        return data

'''
You'll almost never have to implement a hash table yourself.
Hash tables are great for:
  - modeling relationships from one thing to another thing
  - filtering out (catching) duplicates
  - caching/memoizing data instead of making your server do the work
Hash tables have really fast search, insert, and delete.
Once your load factor is greater than 0.7, it's time to resize your hash table.
Average case for a hash table is O(1) constant time.

         Hash tables  Hash tables  Arrays  Linked lists
         (average)    (worst)
search   O(1)         O(n)         O(1)    O(n)
insert   O(1)         O(n)         O(n)    O(1)
delete   O(1)         O(n)         O(n)    O(1)
'''
```

Breadth-First Search

Graphs are a way to model how different things are connected to one another; a graph consists of several nodes joined by edges. To find the shortest path, there are two questions that breadth-first search can answer for you:

  1. Is there a path from node A to node B?
  2. What is the shortest path from node A to node B?

Queues have two operations, enqueue and dequeue. ENQUEUE adds an item to the queue (the counterpart of PUSH); DEQUEUE takes an item off the queue (the counterpart of POP).

The queue is a FIFO data structure: First In, First Out. The stack is a LIFO data structure: Last In, First Out.
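
A minimal sketch of these two queue operations using Python's collections.deque (the names are illustrative):

```python
from collections import deque

queue = deque()
queue.append("alice")   # ENQUEUE: add to the back of the queue
queue.append("bob")
print(queue.popleft())  # DEQUEUE: remove from the front -> "alice" (FIFO)
print(queue.popleft())  # -> "bob"
```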

```python
from collections import deque

# Define the graph with relationships
graph = {
    "you": ["alice", "bob", "claire"],
    "bob": ["anuj", "peggy"],
    "claire": ["thom", "jonny"],
    "alice": [],
    "anuj": [],
    "peggy": [],
    "thom": [],
    "jonny": [],
}

def person_is_seller(name):
    return name[-1] == 'm'   # toy rule: a mango seller's name ends with 'm'

def search(name):
    search_queue = deque()
    search_queue += graph[name]
    searched = []   # keep track of people already checked
    while search_queue:
        person = search_queue.popleft()
        if person not in searched and person in graph:
            if person_is_seller(person):
                print(person + " is a mango seller!")
                return True
            else:
                search_queue += graph[person]
                searched.append(person)
    return False

search("you")

'''
Breadth-first search takes O(number of people + number of edges), which is more
commonly written as O(V+E): V for the number of vertices, E for the number of edges.
Breadth-first search tells you if there's a path from A to B.
If there's a path, breadth-first search will find the shortest path.
If you have a problem like "find the shortest X", try modeling your problem as a
graph and use breadth-first search to solve it.
A directed graph has arrows, and the relationship follows the direction of the
arrow (rama -> adit means "rama owes adit money").
Undirected graphs don't have arrows, and the relationship goes both ways
(ross - rachel means "ross dated rachel and rachel dated ross").
You need to check people in the order they were added to the search list, so the
search list needs to be a queue. Otherwise, you won't get the shortest path.
Once you check someone, make sure you don't check them again. Otherwise, you
might end up in an infinite loop.
'''
```

Dijkstra's Algorithm

Steps:

  1. Find the cheapest node. This is the node you can get to in the least amount of time.
  2. Check whether there's a cheaper path to the neighbors of this node. If so, update their costs.
  3. Repeat until you've done this for every node in the graph.
  4. Calculate the final path.

When you work with Dijkstra's algorithm, each edge in the graph has a number associated with it. These numbers are called weights.

A graph with weights is called a weighted graph. A graph without weights is called an unweighted graph.

To calculate the shortest path in an unweighted graph, use breadth-first search. To calculate the shortest path in a weighted graph, use Dijkstra's algorithm. Graphs can also have cycles.

An undirected graph means that both nodes point to each other, and that's a cycle! With an undirected graph, each edge adds another cycle. Dijkstra's algorithm only works with directed acyclic graphs, called DAGs for short.

```python
# The graph: nested hash tables map each node to its neighbors and edge weights
graph = {}
graph["start"] = {}
graph["start"]["a"] = 6
graph["start"]["b"] = 2
graph["a"] = {}
graph["a"]["fin"] = 1
graph["b"] = {}
graph["b"]["a"] = 3
graph["b"]["fin"] = 5
graph["fin"] = {}

# The costs table: cheapest known cost to reach each node from the start
infinity = float("inf")
costs = {}
costs["a"] = 6
costs["b"] = 2
costs["fin"] = infinity

# The parents table: tracks where the cheapest known path comes from
parents = {}
parents["a"] = "start"
parents["b"] = "start"
parents["fin"] = None

processed = []   # nodes that have already been processed

def find_lowest_cost_node(costs):
    lowest_cost = float("inf")
    lowest_cost_node = None
    for node in costs:
        cost = costs[node]
        if cost < lowest_cost and node not in processed:
            lowest_cost = cost
            lowest_cost_node = node
    return lowest_cost_node

node = find_lowest_cost_node(costs)   # find the cheapest unprocessed node
while node is not None:
    cost = costs[node]
    neighbors = graph[node]
    for n in neighbors.keys():
        new_cost = cost + neighbors[n]
        if costs[n] > new_cost:        # found a cheaper path to this neighbor
            costs[n] = new_cost
            parents[n] = node
    processed.append(node)
    node = find_lowest_cost_node(costs)

'''
> Breadth-first search is used to calculate the shortest path for an unweighted graph.
> Dijkstra's algorithm is used to calculate the shortest path for a weighted graph.
> Dijkstra's algorithm works when all the weights are positive.
> If you have negative weights, use the Bellman-Ford algorithm.
'''
```
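
To print the final path (step 4 above), you can walk the parents table backwards from the finish node once the loop ends. A minimal sketch, run after the script above; the reconstruct_path helper is an illustration, not code from the book:

```python
def reconstruct_path(parents, start, goal):
    path = [goal]
    while path[-1] != start:          # follow parent pointers back to the start
        path.append(parents[path[-1]])
    return list(reversed(path))

print(reconstruct_path(parents, "start", "fin"))  # -> ['start', 'b', 'a', 'fin']
print(costs["fin"])                               # -> 6, the cheapest total cost
```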

Greedy Algorithms

Greedy algorithms are a class of algorithms that make the locally optimal choice at each step in the hope of finding a global optimum. They are straightforward to implement and often used for optimization problems, but they may not always find the best possible solution. Greedy algorithms are easy to write and fast to run, so they make good approximation algorithms. If you have an NP-complete problem, your best bet is to use an approximation algorithm.

```python
def fractional_knapsack(items, capacity):
    # items are (weight, value) pairs; sort by value per weight, best first
    items.sort(key=lambda x: x[1] / x[0], reverse=True)
    total_value = 0
    current_weight = 0
    for item in items:
        if current_weight + item[0] <= capacity:
            # the whole item fits: take all of it
            total_value += item[1]
            current_weight += item[0]
        else:
            # take only the fraction that still fits, then stop
            fraction = (capacity - current_weight) / item[0]
            total_value += fraction * item[1]
            break
    return total_value

items = [(2, 10), (3, 5), (5, 15), (7, 7), (1, 6)]
capacity = 10
max_value = fractional_knapsack(items, capacity)
print(f"Maximum value in knapsack: {max_value}")
```

Dynamic Programming

Dynamic programming is useful when you're trying to optimize something given a constraint. You can use dynamic programming when the problem can be broken into discrete subproblems. Every dynamic-programming solution involves a grid, and the values in the cells are usually what you're trying to optimize. Each cell is a subproblem, so think about how you can divide your problem into subproblems. There's no single formula for calculating a dynamic-programming solution.
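
As a concrete illustration of the grid idea, here is a minimal sketch of the classic 0/1 knapsack problem; the items and values below are loosely based on the book's guitar/stereo/laptop example and are illustrative assumptions:

```python
# Each cell[i][w] is one subproblem: the best value achievable using only the
# first i items with a knapsack of capacity w.
def knapsack(items, capacity):
    # items: list of (name, weight, value); capacity: integer weight limit
    n = len(items)
    cell = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        name, weight, value = items[i - 1]
        for w in range(1, capacity + 1):
            if weight <= w:
                # max of: previous best, or this item plus the best of the remaining space
                cell[i][w] = max(cell[i - 1][w], value + cell[i - 1][w - weight])
            else:
                cell[i][w] = cell[i - 1][w]   # item doesn't fit at this capacity
    return cell[n][capacity]

items = [("guitar", 1, 1500), ("stereo", 4, 3000), ("laptop", 3, 2000)]
print(knapsack(items, 4))   # -> 3500 (take the guitar and the laptop)
```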

Credits go to Aditya Y. Bhargava's book, Grokking Algorithms.
