From Coding Blocks .NET, on 8/12/2016, played: 691 time(s)
In this episode we give a general overview of caching: where it's used, why it's used, and what the differences in hardware implementations mean in terms we can understand. This will be foundational to understanding caching at the software level in an upcoming episode. There's also something about the number 37 that may be the most important number to remember...ever...

The original version of the shownotes can be found at: http://www.codingblocks.net/episode45

Podcast News

- Thanks for your patience: we had a couple of rough audio situations, and we appreciate you sticking with us!
- iTunes reviews: Hedgehog, Thiagoramos.ai, Btn1992, Jonajonlee, UndeadCodemonkey, zmckinnon, hillsidecruzr, Dibjibjub, ddurose
- Stitcher reviews: pchtsp, rafaelh, CK142, TheMiddleMan124, LocalJoost
- 60 TB SSD!!! https://www.engadget.com/2016/08/09/seagate-60tb-ssd/
- Samsung 950 PRO Series - 512GB PCIe NVMe: http://amzn.to/2b0adZt
  - Full review: http://www.anandtech.com/show/9702/samsung-950-pro-ssd-review-256gb-512gb
- Ready Player One (fiction book): http://amzn.to/2aPaCi3
- Clean Code episodes coming soon, plus a book giveaway. Stay tuned!

Caching: Turtles All the Way Down

Turtles all the way down??? https://en.wikipedia.org/wiki/Turtles_all_the_way_down

- Caching is storing a subset of information for faster retrieval.
- The hit ratio increases dramatically as the cache size increases.
- Think about how many caches a simple web request touches:
  - Browser cache
  - DNS cache
  - ISP caching
  - CDN
  - Whatever your application is doing (Redis, framework, database, etc.)
  - Plus whatever the various computers involved are doing
- Why don't we cache everything?
  - Fast is expensive!
  - Cache invalidation is hard!
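The relationship between cache size and hit ratio is easy to see in code. Below is a minimal sketch (not code from the episode) of a fixed-size cache that tracks its own hit ratio; all names here are illustrative:

```python
import random

class SimpleCache:
    """A minimal fixed-size cache that tracks its hit ratio.
    Illustrative sketch only, not code from the episode."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}
        self.hits = 0
        self.misses = 0

    def get(self, key, load):
        if key in self.store:
            self.hits += 1
            return self.store[key]
        self.misses += 1
        value = load(key)  # the expensive path: disk, network, database...
        if len(self.store) < self.capacity:
            self.store[key] = value
        return value

    @property
    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

# A bigger cache holds more of the working set, so the hit ratio climbs.
random.seed(42)
keys = [random.randint(0, 99) for _ in range(10_000)]
for capacity in (10, 50, 100):
    cache = SimpleCache(capacity)
    for k in keys:
        cache.get(k, lambda k: k * 2)
    print(capacity, round(cache.hit_ratio, 2))
```

Note the trade-off the episode calls out: the capacity check is where "fast is expensive" bites, and this sketch sidesteps invalidation entirely, which is the hard part in real systems.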
Caching at the Hardware Level

Latency Numbers Every Programmer Should Know: https://gist.github.com/jboner/2841832
Relative Memory Access Interactive Demo: http://www.overbyte.com.au/misc/Lesson3/CacheFun.html

Caching is a strategy computers use going all the way down to the processor.

- L1 cache: 0.5 ns
  - As quick as it gets; roughly how long it takes light to travel 6 inches
  - Managed by the CPU itself; there's no assembly available to control it
- L2 cache: 7 ns
  - 14x slower than L1
- L3 / L4 / scratch caches also exist
- Main memory: 100 ns for a reference, 250,000 ns for a 1 MB sequential read
  - 14x to 35,714x slower than L2
  - 200x to 500,000x slower than L1
- Network: sending a packet is quick, but in general there's a lot of variability here
  - Round trip within the same datacenter: 500,000 ns
    - 2x slower than a 1 MB read from main memory
    - 1 million times slower than L1
- SSD: wait, is the network faster than the hard drive? Yes... but no.
  - 1 MB sequential read: 1,000,000 ns
    - 2x slower than a datacenter round trip
    - 2 million times slower than L1
- Spinning disk: get your employer to buy you an SSD!
  - 1 MB sequential read: 20,000,000 ns
    - 20x slower than SSD
    - 40 million times slower than L1
- Internet: a rough gauge of internet speeds; highly variable (CDN and ISP caching, for example), but it gives you a sense of scale
  - 150,000,000 ns
  - 7.5x slower than spinning disk
  - 300 million times slower than L1

In more relatable terms, if an L1 cache hit took 1 second:

- Main memory: 5 days
- Datacenter round trip: 11 days
- SSD: 23 days
- Spinning disk: 15 months
- Internet: almost 10 years!
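The "relatable terms" conversion is just scaling every latency so that an L1 hit takes one human second. This short sketch reproduces it from the raw nanosecond figures above (the 1 MB sequential-read numbers from the latency gist); the label names are ours:

```python
# Scale each latency so an L1 cache reference (0.5 ns) takes one "human"
# second, reproducing the episode's relatable comparisons.
L1_NS = 0.5

latencies_ns = {
    "L1 cache reference": 0.5,
    "Main memory (1 MB read)": 250_000,
    "Datacenter round trip": 500_000,
    "SSD (1 MB read)": 1_000_000,
    "Spinning disk (1 MB read)": 20_000_000,
    "Internet round trip": 150_000_000,
}

SECONDS_PER_DAY = 86_400
for name, ns in latencies_ns.items():
    human_seconds = ns / L1_NS  # one L1 hit == one second on this scale
    days = human_seconds / SECONDS_PER_DAY
    print(f"{name}: {human_seconds:,.0f} s (~{days:,.1f} days)")
```

Running this confirms the scale of the episode's figures: main memory lands near 5 days, spinning disk at roughly 15 months, and the internet round trip at nearly a decade.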
Think about how those numbers map to the layers a typical application touches:

- RAM / application cache
- Local hard drive
- Network storage
- Cache server
- Database

Summary

We hope we gave you a good idea of the importance and scale of caching in computing at the hardware level. Things we didn't talk about, coming in a future episode: application/software caching and caching algorithms.

Resources We Like

- Latency Numbers Every Programmer Should Know: https://gist.github.com/jboner/2841832
- How L1 and L2 caching work: http://www.extremetech.com/extreme/188776-how-l1-and-l2-cpu-caches-work-and-why-theyre-an-essential-part-of-modern-chips
- Relative Memory Access Interactive Demo: http://www.overbyte.com.au/misc/Lesson3/CacheFun.html
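Those application-level layers (RAM, disk, network storage, cache server, database) behave like the hardware hierarchy: check the fastest tier first, fall back to slower ones, and backfill on the way up. Here's a hypothetical read-through sketch of that idea; the tier names and `read_through` helper are ours, not from the episode:

```python
# A hypothetical read-through chain over cache tiers, fastest first.
# Tier names are illustrative only.
class Tier:
    def __init__(self, name):
        self.name = name
        self.data = {}

    def get(self, key):
        return self.data.get(key)

    def put(self, key, value):
        self.data[key] = value

def read_through(tiers, key, load_from_db):
    # Check tiers from fastest to slowest.
    for i, tier in enumerate(tiers):
        value = tier.get(key)
        if value is not None:
            # Backfill the faster tiers so the next read is cheap.
            for faster in tiers[:i]:
                faster.put(key, value)
            return value, tier.name
    value = load_from_db(key)  # slowest path: the database itself
    for tier in tiers:
        tier.put(key, value)
    return value, "db"

tiers = [Tier("app-memory"), Tier("local-disk"), Tier("cache-server")]
value, source = read_through(tiers, "user:42", lambda k: {"id": 42})
print(source)  # first read falls all the way through to the db
value, source = read_through(tiers, "user:42", lambda k: {"id": 42})
print(source)  # subsequent reads hit the fastest tier, app-memory
```

Given the latency numbers above, each tier you skip saves orders of magnitude, which is why the backfill step matters.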