This is really helpful! Very reminiscent of Fireship's 100 seconds videos, I can totally see you becoming big like that. Also the animation is great.
Hashmaps and dictionaries are not the same thing. Yes, a hashmap is a dictionary, but a dictionary is not necessarily a hashmap. You can implement a dictionary with just about any data structure that lets you insert and remove data, e.g. an array of key-value structs: you simply traverse it with a linear search to find the key you're looking for. It's not efficient, but it works, and it is a dictionary. A hashmap, on the other hand, uses a hash function with the key as a parameter to compute a hashcode. That code is used to decide where to store the data in your chosen data structure.
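The distinction above can be sketched in Python. Both classes below are hypothetical illustrations (the names `ListDict` and `HashMap` are made up for this example): the first is a perfectly valid dictionary backed by a plain list with linear search, the second uses the key's hash to pick a bucket.

```python
# A "dictionary" built on a plain list of (key, value) pairs:
# correct, but every lookup is an O(n) linear search.
class ListDict:
    def __init__(self):
        self.pairs = []

    def put(self, key, value):
        for i, (k, _) in enumerate(self.pairs):
            if k == key:
                self.pairs[i] = (key, value)  # overwrite existing key
                return
        self.pairs.append((key, value))

    def get(self, key):
        for k, v in self.pairs:  # linear search
            if k == key:
                return v
        raise KeyError(key)


# A hashmap: the hashcode of the key decides which bucket to use,
# so a lookup only scans the (hopefully short) matching bucket.
class HashMap:
    def __init__(self, n_buckets=8):
        self.buckets = [[] for _ in range(n_buckets)]

    def put(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self.buckets[hash(key) % len(self.buckets)]:
            if k == key:
                return v
        raise KeyError(key)
```

Both expose the same dictionary interface; only the hashmap uses a hash function to narrow down where to look.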
What can I do to make my future videos better? Any feedback (even negative) is greatly appreciated!
I'm a bit tired of hearing hashmaps are "O(1)" when any implementation I've seen is dependent on the hash function as well as the collision handling. To my understanding, those two cannot possibly be implemented in O(1) unless you have an internal array large enough to uniquely cover every possible hash value thus avoiding collisions.
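The concern above is easy to demonstrate: with a degenerate hash function, every key collides and lookups degrade to a linear scan. This toy snippet (the `bad_hash` name is made up for illustration) shows why the "O(1)" claim really means "amortized, assuming a hash that spreads keys evenly".

```python
# Worst case: a hash function that sends every key to the same bucket.
def bad_hash(key):
    return 0  # all keys collide

buckets = [[] for _ in range(8)]
for k in ["a", "b", "c", "d"]:
    buckets[bad_hash(k) % 8].append((k, k.upper()))

# All four pairs pile into bucket 0, so finding "d" means
# scanning past the three entries inserted before it -- O(n).
assert len(buckets[0]) == 4
assert dict(buckets[0])["d"] == "D"
```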
ah yes, smoking hash to do fast coding
0:12 lmao, I got the meme references (except Mike, who's Mike?) 0:19 to anyone at this part, please note that he said "interpreters". Compiled langs (like Rust and C) don't need hashmaps to look up vars, because all the variables that will ever be used are known and declared statically at compile time, so the compiler can replace vars with raw pointers to memory addresses, completely eliminating the need for a "middle-man". Dynamic langs like Py and JS usually require hashmaps, because the halting problem prevents the bytecode compiler from predicting assignments and declarations
these videos are so high quality, I'm genuinely surprised you don't have more clout
I had always imagined they were done with trees and were therefore O(log n) for both reading and writing; that allows scaling up without reorganising as it grows bigger. The next question, then, is what size is used when creating the new object? Too large, and creating a lot of them uses lots of memory; too small, and it has to rejig itself a few times as more and more elements are added. I assume the size would go up by powers of 2 or 4, putting a limit on the proportion of 'wasted' space, although that extra space makes collisions less frequent. The most efficient algorithm may depend on the ratio of reads to writes, not to mention knowledge of how many elements there are going to be.
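The power-of-2 growth policy described above can be sketched as follows. This is a hypothetical toy (the `GrowingMap` name, the initial size of 8, and the 0.75 load-factor threshold are all assumptions for illustration, though real implementations use similar values): when the table gets too full, double the bucket array and rehash.

```python
# Sketch of power-of-2 growth: double the bucket array when the
# load factor (entries / buckets) passes a threshold, so wasted
# space stays a bounded fraction and rehashes stay infrequent.
class GrowingMap:
    def __init__(self):
        self.n = 0
        self.buckets = [[] for _ in range(8)]  # initial size: assumption

    def put(self, key, value):
        if self.n / len(self.buckets) > 0.75:    # load-factor threshold
            self._resize(len(self.buckets) * 2)  # power-of-2 growth
        b = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(b):
            if k == key:
                b[i] = (key, value)
                return
        b.append((key, value))
        self.n += 1

    def get(self, key):
        for k, v in self.buckets[hash(key) % len(self.buckets)]:
            if k == key:
                return v
        raise KeyError(key)

    def _resize(self, new_size):
        old = [pair for b in self.buckets for pair in b]
        self.buckets = [[] for _ in range(new_size)]
        for k, v in old:  # rehash every entry into the larger table
            self.buckets[hash(k) % new_size].append((k, v))
```

The occasional full rehash is what makes the O(1) insert cost "amortized": most inserts are cheap, and the expensive ones are rare enough that the average stays constant.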
These videos are great. Keep them coming.
From my understanding, most practical implementations of hash maps append additional colliding key-value pairs as nodes of a linked list...? This has the advantage of not requiring any costly reallocations when inserting a value under a colliding key, while still managing to provide close to `O(1)` time complexity, even with the random memory access...
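That scheme is usually called separate chaining. A minimal sketch with actual linked nodes (the `Node` and `ChainedMap` names are made up for this example): a colliding insert just prepends a node to the bucket's chain, with no reallocation of existing entries.

```python
# Separate chaining with linked nodes: each bucket holds the head
# of a singly linked list of colliding entries.
class Node:
    def __init__(self, key, value, nxt=None):
        self.key, self.value, self.next = key, value, nxt

class ChainedMap:
    def __init__(self, n_buckets=8):
        self.heads = [None] * n_buckets  # one chain head per bucket

    def put(self, key, value):
        i = hash(key) % len(self.heads)
        # O(1) prepend; no existing entry ever moves in memory
        self.heads[i] = Node(key, value, self.heads[i])

    def get(self, key):
        node = self.heads[hash(key) % len(self.heads)]
        while node is not None:  # walk the chain on collision
            if node.key == key:
                return node.value
            node = node.next
        raise KeyError(key)
```

The trade-off is exactly the random memory access mentioned above: chasing `next` pointers is cache-unfriendly, which is why some implementations chain with small arrays instead.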
Hey these videos are great. What software do you use to create them??
The next Fireship channel, keep it up ❤❤
Good content. Only improvement I suggest is mic quality. It sounds a bit like you are way too close to the microphone
Would creating an array within an array be the same as creating a linked list within an array? They're both essentially serving the same function and end up having to linearly search their respective nodes/arrays
Try speaking a bit louder please, it will help
Just food for thought on your next video: "dictionary vs inline cache"
Love the ethereal sounding voice
So, it still performs a linear search?
When's the next video? One tip: it sounds like you're whispering from behind me lol