
Yellow Bricks

by Duncan Epping


Software Defined Storage

Testing vSphere Virtual SAN in your virtual lab with vSphere 5.5

Duncan Epping · Sep 2, 2013 ·

For those who want to start testing the beta of vSphere Virtual SAN in their lab with vSphere 5.5, I figured it would make sense to describe how I created my nested lab. (Do note that performance will be far from optimal.) I am not going to describe how to install ESXi nested, as there are plenty of articles out there that describe how to do that. I suggest creating ESXi hosts with three disks each and a minimum of 5GB of memory per host:

  • Disk 1 – 5GB
  • Disk 2 – 20GB
  • Disk 3 – 200GB

After you have installed ESXi and imported a vCenter Server Appliance (my preference for lab usage, it is so easy and fast to set up!), add your ESXi hosts to your vCenter Server. Note: add them to the vCenter Server directly, NOT to a cluster yet.

Log in via SSH to each of your ESXi hosts and run the following commands:

  • esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba2:C0:T0:L0 --option "enable_local enable_ssd"
  • esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba3:C0:T0:L0 --option "enable_local"
  • esxcli storage core claiming reclaim -d mpx.vmhba2:C0:T0:L0
  • esxcli storage core claiming reclaim -d mpx.vmhba3:C0:T0:L0

These commands ensure that the disks are seen as "local" disks by Virtual SAN and that the 20GB disk is seen as an "SSD", even though it isn't actually an SSD. There is another option which might even be better: you can simply add a VMX setting to specify that the disk is an SSD. Check William's awesome blog post for the how-to.
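As a rough sketch of that alternative: the VMX route boils down to adding a virtualSSD flag for the disk in the nested ESXi VM's configuration file, after which you can check from the ESXi shell how the device is detected. Treat the lines below as an example rather than gospel; the SCSI controller/device numbers are assumptions from my setup and William's post is the authoritative reference for the syntax.

scsi1:0.virtualSSD = "1"

~ # esxcli storage core device list -d mpx.vmhba2:C0:T0:L0

The device list output should show the "Is SSD" and "Is Local" fields set to true once the disk is tagged correctly.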

After running these commands we need to make sure the hosts are configured properly for Virtual SAN. Remember, the hosts were added at the Datacenter level and are not part of a cluster yet.

Now we will configure the hosts properly. We need to create an additional VMkernel adapter; do this for each of the three hosts:

  1. Click on your host within the web client
  2. Click “Manage” -> “Networking” -> “VMkernel Adapters”
  3. Click the “Add host networking” icon
  4. Select “VMkernel Network Adapter”
  5. Select the correct vSwitch
  6. Provide an IP-Address and tick the “Virtual SAN” traffic tickbox!
  7. Next -> Next -> Finish
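If you want to double check the result from the command line, the following should list the VMkernel interfaces tagged for Virtual SAN traffic and show the IP configuration of the new adapter. This is just a sketch and assumes the adapter you created is vmk1, so adjust the interface name for your setup:

~ # esxcli vsan network list
~ # esxcli network ip interface ipv4 get -i vmk1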

When this is configured for all three hosts, configure a cluster:

  1. Click your “Datacenter” object
  2. On the “Getting started” tab click “Create a cluster”
  3. Give the cluster a name and tick the “Turn On” tickbox for Virtual SAN
  4. Also enable HA and DRS if required

Now you should be able to move your hosts into the cluster. With the Web Client for vSphere 5.5 you can simply drag and drop the hosts one by one into the cluster. Virtual SAN will now be configured automatically for these hosts… nice, right? When all configuration tasks are completed, click on your Cluster object and then "Manage" -> "Settings" -> "Virtual SAN". You should now see the number of hosts that are part of the VSAN cluster, the number of SSDs, and the number of data disks.
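You can also verify this per host from an SSH session. A minimal sketch of the commands I would use for that, showing cluster membership and the disks claimed by Virtual SAN respectively:

~ # esxcli vsan cluster get
~ # esxcli vsan storage list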

Now before you get started there is one thing you will need to do, and that is enable “VM Storage Policies” on your cluster / hosts. You can do this via the Web Client as follows:

  • Click the “home” icon
  • Click “VM Storage Policies”
  • Click the little policy icon with the green checkmark, second from the left
  • Select your cluster and click “Enable” and then close

Note that although you have enabled VM Storage Policies, there are no pre-defined policies. Yes, there is a "default policy", but you can only see it on the command line. For those interested, just open up an SSH session and run the following command:

~ # esxcli vsan policy getdefault
Policy Class  Policy Value
------------  --------------------------------------------------------
cluster       (("hostFailuresToTolerate" i1) )
vdisk         (("hostFailuresToTolerate" i1) )
vmnamespace   (("hostFailuresToTolerate" i1) )
vmswap        (("hostFailuresToTolerate" i1) ("forceProvisioning" i1))
~ #

This means that with "hostFailuresToTolerate" set to 1, Virtual SAN can tolerate a single host failure before you potentially lose data. In other words, in a 3 node cluster you will have 2 copies of your data and a witness. Now if you would like to have N+2 resiliency instead of N+1, that is fairly straightforward. You do the following:

  • Click the “home” icon
  • Click “VM Storage Policies”
  • Click the “New VM Storage Policy” icon
  • Give it a name, I used “N+2 resiliency” and click “Next”
  • Click “Next” on Rule-Sets and select a vendor, which will be “vSan”
  • Now click <add capability> and select “Number of failures to tolerate” and set it to 2 and click “Next”
  • Click “Next” -> “Finish”

That is it for creating a new profile. Of course you can make these as complex as you want; there are various other options like "Number of disk stripes" and "Flash read cache reservation %". For now I wouldn't recommend tweaking these too much unless you fully understand the impact of changing them.
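As a side note, the default policy from the esxcli output shown earlier can also be changed from the shell. The sketch below is based on the policy string format that "esxcli vsan policy getdefault" prints; the value used (2 host failures to tolerate for the vdisk class) is purely an example, and in general I would leave the default alone and work with VM Storage Policies instead:

~ # esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i2))"
~ # esxcli vsan policy getdefault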

In order to use the policy, go to an existing virtual machine, right-click it, and do the following:

  • Click “All vCenter Actions”
  • Click “VM Storage Service Policies”
  • Click “Manage VM Storage Policies”
  • Select the appropriate policy on “Home VM Storage Policy” and do not forget to hit the “Apply to disks” button
  • Click OK

Now the new policy will be applied to your virtual machine and its disk objects! Also, while deploying a new virtual machine, you can select the correct policy directly in the provisioning workflow so that it is deployed in the correct fashion from the start.

These are some of the basics for testing VSAN in a virtual environment… now register and get ready to play!

Introduction to VMware Virtual SAN (vSAN)

Duncan Epping · Aug 26, 2013 ·

VMware Virtual SAN, or I should say VMware vSAN, has been around since August 2013. Back then it was indeed called Virtual SAN; today it is officially known as vSAN, which is what most people called it anyway. As this article keeps popping up in Google search results, I figured I would rewrite it and provide a better, more generic introduction to vSAN which is up to date and covers what VMware vSAN is about, up to the version current at the time of writing, which is VMware vSAN 6.6.

VMware vSAN is a software based distributed storage solution. Some will refer to it as hyper-converged, others will call it software defined storage, and some even referred to it as hypervisor converged at some point. The reason for this is simple: VMware vSAN is fully integrated with VMware vSphere. Those of you reading this who are vSphere administrators will have no problem configuring vSAN. If you know how to enable HA and DRS, then you know how to configure vSAN. Of course you will need to have a vSAN network, and you achieve this by creating a VMkernel interface and enabling vSAN on it. vSAN works with L2 and L3 networks, and as of vSAN 6.6 no longer requires multicast to be enabled on the network. (If you want to know what changed with vSAN 6.6, read this article.)

[Image: enabling vSAN traffic on a VMkernel adapter]
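For those who prefer the command line over the Web Client tickbox, tagging a VMkernel interface for vSAN traffic can also be done with esxcli. A minimal sketch, assuming vmk2 is the interface you created for vSAN (the second command simply lists what is currently tagged):

~ # esxcli vsan network ipv4 add -i vmk2
~ # esxcli vsan network list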

Before we get a bit more into the weeds, what are the benefits of a solution like vSAN? What are the key selling points?

  • Software defined – Use industry standard hardware, as long as it is on the HCL you are good to go!
  • Flexible – Scale as needed and when needed. Just add more disks or add more hosts, yes both scale-up and scale-out are possible.
  • Simplicity – Ridiculously easy to manage! Ever tried implementing or managing some of the storage solutions out there? If you did, you know what I am getting at.
  • Automated – Per virtual machine and per virtual disk policy based management. Yes, even VMDK level granularity. No more policies defined on a per LUN/Datastore level, but at the level where you need it!
  • Hyper-Converged – It allows you to create dense / building block style solutions!

To me “simplicity” is the key reason customers buy vSAN. Not just simplicity in configuring or installing, but even more so simplicity in management. Features like the vSAN Health Check provide a lot of value to the admin. With one glance you can see what the status is of your vSAN. Is it healthy or not? If not, what is wrong?

[Image: vSAN Health Check in the Web Client]
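As far as I am aware, recent vSAN releases also expose the health check results on the host's command line; treat the command below as an assumption on my part and check the documentation for your specific version:

~ # esxcli vsan health cluster list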

Okay, that sounds great, right? But where does it fit in? What are the use cases for vSAN, and how are our 7000+ customers using it today?

  • Production / Business Critical Workloads
    • Exchange, Oracle, SQL, anything basically…. This is what the majority of customers use vSAN for.
  • Management Clusters
    • Isolate your management workloads completely, and remove the dependency on your storage system being available. Even when your enterprise storage system is down, you still have access to your management tools.
  • DMZ
    • Where NSX helps isolate a DMZ from the outside world from a networking/security point of view, vSAN can do the same from a storage point of view. Create a separate cluster and avoid having your production storage go down during a denial of service attack, and avoid complex isolated SAN segments!
  • Virtual desktops
    • Scale out model, using predictive (performance etc) repeatable infrastructure blocks lowers costs and simplifies operations. Note that vSAN is included with Horizon Advanced and Enterprise!
  • Test & Dev
    • Avoids acquisition of expensive storage (lowers TCO), fast time to provision, easy scale out and up when required!
  • Big Data
    • Scale out model with high bandwidth capabilities, Hadoop workloads are not uncommon on vSAN!
  • Disaster recovery target
    • A low-cost DR solution, enabled through a feature like vSphere Replication, which allows you to replicate to any storage platform. Other options are of course VAIO based replication mechanisms like Dell EMC RecoverPoint.

Yes, that is a long list of use cases; I guess it is fair to say that vSAN fits anywhere and everywhere! Now, let's get a bit more technical, just a bit, as this is an introduction, and for those who want to know more about specific features and settings I have hundreds of vSAN articles on my blog. There is also a vSAN book available, and then there's of course the long list of articles by the likes of William Lam and Cormac Hogan.

When vSAN is enabled a single shared datastore is presented to all hosts which are part of the vSAN enabled cluster. Typically all hosts will contribute performance (SSD) and capacity (magnetic disks or flash) to this shared datastore. This means that when your cluster grows from a compute perspective, your datastore will typically grow with it. (Not a requirement, there can be hosts in the cluster which just consume the datastore!) Note that there are some requirements for hosts which want to contribute storage. Each host will require at least one flash device for caching and one capacity device. From a clustering perspective, vSAN supports the same limits as vSphere: 64 hosts in a single cluster. Unless you are creating a stretched cluster, then the limit is 31 hosts. (15 per site.)

As can be expected from any recent storage system, vSAN relies heavily on flash for performance. Every write I/O will go to the flash cache first, and eventually it will be destaged to the capacity tier. vSAN supports different types of flash devices (the broadest support in the industry), ranging from SATA SSDs to 3D XPoint NVMe based devices. This goes for both the caching and the capacity tier. Note that for the capacity layer, vSAN of course also supports regular spinning disks, ranging from NL-SAS to SAS and from 7200 RPM to 15k RPM. Just check the vSAN Ready Node HCL or the vSAN Component HCL for what is supported and what is not.

As mentioned, you can set policies at a per virtual machine or even per virtual disk level. These policies define availability and performance aspects of your workloads, but for instance also allow you to specify whether checksumming needs to be enabled or not. There are two key features which are not policy driven at this point: "Deduplication and Compression" and Encryption. Both of these are enabled at the cluster level. But let's get back to policy based management. Before deploying your first VMs, you will typically create one (or multiple) policies. In a policy you define what the characteristics of the workload should be, for instance, as shown in the example below, how many failures the VM should be able to tolerate. In this example the "primary" and "secondary" level of failures to tolerate are both set to 1, which in this case means the VM is stretched across 2 locations and also protected by RAID-5 within each site, as the "Failure Tolerance Method" is also specified.

[Image: vSAN VM Storage Policy with primary and secondary failures to tolerate]

The above is a rather complex example; it can be as simple as only setting "Failures to tolerate" to "1", which in reality is what most people do. This means you will need 3 nodes at a minimum, and from a VM perspective you will have 2 copies of the data and 1 witness. vSAN is often referred to as a generic object based storage platform, but what does that mean? The VM can be seen as an object, and each copy of the data and the witness can be seen as components. Objects are placed and distributed across the cluster as specified in your policy. As such, vSAN does not require a local RAID set, just a bunch of local disks which can be attached to a passthrough disk controller. Whether you defined 1 host failure to tolerate or, for instance, 3 host failures to tolerate, vSAN will ensure enough replicas of your objects are created within the cluster. Is this awesome or what?

Let's take a simple example to illustrate that, as I realize it is easy to get lost in all these technical terms. We have configured 1 host failure to tolerate and we create a new virtual disk. This results in vSAN creating 2 identical data components and a witness component. The witness is there just in case something happens to your cluster, to help decide who takes control in case of a failure; the witness is not a copy of your data component, let that be clear, it is just a quorum mechanism. Note that the number of hosts in your cluster can potentially limit the number of "host failures to tolerate". In other words, in a 3 node cluster you cannot create an object that is configured with 2 "host failures to tolerate", as it would require vSAN to place components on 5 hosts at a minimum. (Cormac has a simple table for it here.) Difficult to visualize? Well, this is what it would look like at a high level for a virtual disk which tolerates 1 host failure:

First, let's point out that the VM, from a compute perspective, does not need to be aligned with the data components. In order to provide optimal performance, vSAN has an in-memory read cache which is used to serve the most recently used blocks from memory. Blocks which are not in the memory cache will need to be fetched from either of the two hosts that serve the data component. Note that a given block is always read from the same host; this is to optimize the use of the flash based read cache. For writes it is straightforward: every write is synchronously pushed to the hosts that contain data components for that VM. Some may refer to this as replication or mirroring. With all this replication going on, are there requirements for networking? At a minimum vSAN requires a dedicated 1Gbps NIC port for hybrid configurations, and 10GbE for all-flash configurations. Needless to say, 10GbE is definitely preferred with solutions like these, and you should always have an additional NIC port available for resiliency. There is no requirement from a virtual switch perspective; you can use either the Distributed Switch or the plain old vSwitch, both will work fine. The Distributed Switch is recommended and comes included with the vSAN license.
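A simple sanity check for the network part of this: make sure the hosts can actually reach each other over the vSAN-tagged VMkernel interface, and, if you use jumbo frames, that the MTU is consistent end to end. The interface name, IP address and payload size below are hypothetical examples for illustration only:

~ # vmkping -I vmk1 192.168.50.12
~ # vmkping -I vmk1 -s 8972 -d 192.168.50.12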

So what else is there, well from a feature / functionality perspective there’s a lot. Let me list some of my favourite features:

  • RAID-1 / RAID-5 / RAID-6
  • Stretched Clustering
  • All-Flash for all License options
  • Deduplication and Compression
  • vSAN Datastore Encryption
  • iSCSI Targets (for physical machines)

That more or less covers the basics, and I think it is a decent introduction to vSAN; something that hopefully sparks your interest in this distributed storage platform that is deeply integrated with vSphere and enables convergence of compute and storage resources as never seen before. It provides virtual machine and virtual disk level granularity through policy based management. It allows you to control availability, performance and security in a simple and efficient way that I have not seen before. And then I haven't even touched on features like the Health Check, Config Assist, Easy Install and any of the other cool features that are part of vSAN 6.6.

If there are any questions, find me on twitter!

Startup intro: SolidFire

Duncan Epping · Jun 27, 2013 ·

This seems to be becoming a true series, introducing startups… Now, in the case of SolidFire I am not really sure if I should use the word startup, as they have been around since 2010. But then again, it is not a consumer solution that they've created, and enterprise storage platforms typically take a lot longer to develop and mature. SolidFire was founded in 2010 by Dave Wright, who discovered a gap in the storage market while he was working for Rackspace. The opportunity Dave saw was in the Quality of Service area. Not many storage solutions out there could provide predictable performance in almost every scenario, were designed for multi-tenancy, and offered a rich API. Back then the term Software Defined Storage wasn't coined yet, but I guess it is fair to say that is how we would describe it today. This is actually how I got in touch with SolidFire: I wrote various articles on the topic of Software Defined Storage, and tweeted about the topic many times, and SolidFire was one of the companies who consistently joined the conversation. So what is SolidFire about?

SolidFire is a storage company; they sell storage systems and today they offer two models, the SF3010 and the SF6010. What is the difference between these two? Cache and capacity! With the SF3010 you get 72GB of cache per node and it uses 300GB SSDs, whereas the SF6010 gives you 144GB of cache per node and uses 600GB SSDs. Interesting? Well, only to a certain point I would say; SolidFire isn't really about the hardware if you ask me. It is about what is inside the box, or boxes I should say, as the starting point is always 5 nodes. So what is inside?

Architecture

SolidFire's architecture is based on a scale-out model and of course flash, in the form of SSDs. You start out with 5 nodes and you can go up to 100 nodes, all connected to your hosts via iSCSI. Those 100 nodes would be able to provide you with 5 million IOPS and about 2.1 petabytes of capacity. Each node that is added linearly scales performance and of course adds capacity. SolidFire also offers deduplication, compression and thin provisioning. Considering it is a scale-out model it is probably not needed to point this out, but dedupe and compression are cluster wide. Now, the nice thing about the SolidFire architecture is that they don't use traditional RAID, which means that the long rebuild times when a disk or a node fails do not apply to SolidFire. Instead, SolidFire evenly distributes data across all disks and nodes, so when a single disk or even a node fails, the rebuild is not constrained by a limited set of resources; many components can help in parallel to get back to a normal state. What I liked most about their architecture is that it already closely aligns with VMware's Virtual Volumes (VVOL) concept; SolidFire is prepared for VVOLs when it is released.

Quality of Service

I already briefly mentioned this, but Quality of Service (QoS) is one of the key drivers of the SolidFire solution. It revolves around the ability to provide a given amount of capacity with a given amount of performance (IOPS). What does this mean? SolidFire allows you to specify a minimum and maximum number of IOPS for a volume, and also a burst level. Let's quote the SolidFire website, as I think they explain it in a clear way:

  • Min IOPS – The minimum number of I/O operations per-second that are always available to the volume, ensuring a guaranteed level of performance even in failure conditions.
  • Max IOPS – The maximum number of sustained I/O operations per-second that a volume can process over an extended period of time.
  • Burst IOPS – The maximum number of I/O operations per-second that a volume will be allowed to process during a spike in demand, particularly effective for data migration, large file transfers, database checkpoints, and other uneven latency sensitive workloads.

Now, I do want to point out that SolidFire storage systems have no form of admission control when it comes to QoS. Although a guaranteed level of performance is mentioned, this is up to the administrator: you as the admin will need to do the math and not overprovision from a performance point of view if you truly want to guarantee a specific performance level. And if you do, you will need to take failure scenarios into account!

One thing that my automation friends William Lam and Alan Renouf will like is that you can manage all these settings using SolidFire's REST-based API.

(VMware) Integration

Of course, integration came up during the conversation. SolidFire is all about enabling their customers to automate as much as they possibly can, and they have implemented a REST-based API. They are heavily investing in integration with, for instance, OpenStack, but also with VMware. They offer full support for the vSphere Storage APIs for Storage Awareness (VASA) and are also working towards full support for the vSphere Storage APIs for Array Integration (VAAI). Currently not all VAAI primitives are supported, but they promised me that this is just a matter of time. (They support Block Zeroing, Space Reclamation and Thin Provisioning; see the HCL for more details.) On top of that they are also looking at the future and going full steam ahead when it comes to Virtual Volumes. Obvious question from my side: what about replication / SRM? This is being worked on, hopefully more news about this soon!

Now with all this integration did they forget about what is sitting in between their storage system and the compute resources? In other words what are they doing with the network?

Software Defined Networking?

I can be short: no, they did not forget about the network. SolidFire is partnering with Plexxi and Arista to provide a great end-to-end experience when it comes to building a storage environment. Where the focus with Arista is currently more on monitoring the different layers, Plexxi seems to focus more on the configuration and performance optimization aspect. No end-to-end QoS yet, but a great step forward if you ask me! I can see this being expanded in the future.

Wrapping up

I had already briefly looked at SolidFire after the various tweets we exchanged, but this proper introduction has really opened my eyes. I am impressed by what SolidFire has achieved in a relatively short amount of time. Their solution is all about the customer experience, whether that is performance related or the ability to automate the full storage provisioning process… their architecture / concept caters for this. I have definitely added them to my list of storage vendors to visit at VMworld, and I am hoping that those who are looking into Software Defined Storage solutions will do the same, as SolidFire belongs on that list.

Startup Intro: Infinio

Duncan Epping · Jun 20, 2013 ·

Infinio is demo’ing their brand new product today at Tech Field Day #9. I was briefed by Infinio a couple of weeks back and figured I would share some details with you. Infinio is releasing a product called Infinio Accelerator and describes it as a “downloadable storage performance” solution. That sounds nice, but what does that mean?

Infinio has developed a virtual appliance that sits in between your virtual machine storage traffic and your NFS datastore. Note I said "NFS datastore" and not just "datastore", as NFS is their current focus. Why just NFS and not block storage? That is because of the architecture they have chosen, or rather, because of how they intercept traffic going to or coming from the datastore.

The Infinio virtual appliance enhances storage performance by caching I/O; their primary use case is caching in memory. So what does it look like? Basically, every host in the cluster gets an Infinio appliance installed. This appliance has 2 vCPUs and 8GB of memory by default, and from that memory a shared caching pool is created to accelerate read I/O. (Yes, there is a downside to using an appliance; read this article by Frank.) The nice thing is that this pool of memory is deduplicated cluster wide, though considering the appliance holds 8GB of memory, that deduplication is a requirement if you ask me. (Just revealed at TFD: the appliance will get deployed with 4, 8 or 16GB of memory based on the amount of memory in the host.) The other key phrase here is "read I/O"; for now Infinio Accelerator is a read cache solution, so no write back, but that might change in the future, who knows. The video below also mentions SSD caching; the Tech Field Day session revealed that this is something that is being worked on for inclusion in the future.

One thing where Infinio definitely excels is the installation / configuration process, and even the purchase options are simple. You download a simple installer, point it to your vCenter Server, do a couple of "next / next / finish" actions and that is that. You want to buy the product? It will be even easier than installing: just hit the website, grab your credit card and that is it. Definitely something I always appreciate, companies keeping it simple.

One thing I do want to call out (I asked this question during the TFD broadcast) is that today there is no direct integration with vCenter Server or with vC Ops. In my opinion a missed opportunity, especially considering the product is focused on the virtualization market.

How do they compare to other caching solutions out there? Well, that is difficult to say at the moment; if I can find the time and get some proper SSDs in my lab, I might test and compare the various solutions at some point. If you ask me, there are benefits to both SSD/flash and in-memory caching. What will determine their success is how it is implemented (product quality), where they sit in the I/O stack, how resilient the solution is, and what kind of caching they offer. As I said, maybe more on this in the future.

That is about all I can share for now. For some more details I suggest watching the 8 minute pitch by their co-founder and CEO Arun Agarwal all the way at the bottom, or the Tech Field Day introduction videos and deepdive.

When will it be available? The public beta is scheduled to be available around VMworld, and Infinio is aiming for a GA release in Q4 of 2013.

Tech Field Day – Introductions

Tech Field Day – Demo

Tech Field Day – Deepdive / How it works

8 Minute Pitch

Software Defined Storage – What are our fabric friends doing?

Duncan Epping · Jun 13, 2013 ·

I have been discussing Software Defined Storage for a couple of months now and I have noticed that many of you are passionate about this topic as well. One thing that stood out to me during these discussions is that the focus is on the "storage system" itself. What about the network in between your storage system and your hosts? Where does that come into play? Is there something like a Software Defined Storage Network? Do we need it, or is that just part of Software Defined Networking?

When thinking about it, I can see some clear advantages to a Software Defined Storage Network; I think the answer to each of the questions below is: YES.

  • Wouldn’t it be nice to have end-to-end QoS? Yes from the VM up to the array and including the network sitting in between your host and your storage system!
  • Wouldn't it be nice to have Storage DRS and DRS be aware of storage latency, so that the placement engine can factor that in? It is nice to have improved CPU/memory performance, but what good is that when your slowest component (storage + network) is the bottleneck?
  • Wouldn’t it be nice to have a flexible/agile but also secure zoning solution which is aware of your virtual infrastructure? I am talking VM mobility here from a storage perspective!
  • Wouldn’t it be nice to have a flexible/agile but also secure masking solution which is VM-aware?

I can imagine that some of you are iSCSI or NFS users and are less concerned with things like zoning, but QoS end-to-end could be very useful, right? For everyone, a tighter integration between the three different layers (compute -> network <- storage) would be useful from a VM mobility perspective, not just for performance but also to reduce operational complexity. Which datastore is connected to which cluster? Where does VM-A reside? If there is something wrong with a specific zone, which workloads does it impact? There are so many different use cases for a tighter integration. I am guessing that most of you can see the common one: a storage administrator making zoning/masking changes leading to a permanent device loss and ultimately your VMs hopelessly crashing. Yes, that could be prevented when all three layers are aware of each other and integration would warn both sides about the impact of changes. (You could also try communicating with each other of course, but I can understand you want to keep that to a bare minimum ;-))

I don't hear too many vendors talking about this yet, to be honest. Recently I saw Jeda Networks making an announcement around Software Defined Storage Networks, or at least a bunch of statements and a high level white paper. Brocade is working with EMC to provide some more insight/integration and automation through ViPR… and maybe others are working on something similar, but so far I haven't seen much.

I am wondering what you would be looking for and what you would expect. Please chip in!

