The Importance of Design Language Systems and Auto Layout

Most of us know Design Language Systems (DLS) as collections of components used to build apps for Android, iOS, or Salesforce. It’s the catalog of components where we get the iOS drawer, Android buttons, Salesforce cards, etc. However, too few of us have thought about how we can create engaging and efficient DLS for our clients. Evolving a catalog of components into a DLS with efficiencies built in can speed up the design process, cut down on redundancies, and deliver value to organizations and designers alike.

In addition to creating DLS components, designers should also look into incorporating Figma’s Auto Layout as an easy way to maintain alignment as components change size. Building Auto Layout into a DLS can cut production time in half and yields high-fidelity mockups. Strive’s Technology Enablement experts and team of UX/UI designers do just that, spending less time on production work and more time problem solving. Remember, the more time designers spend producing mockups, the more money and resources companies are spending.

Greater speed also means the ability to quickly adjust to new requirements or iterate based on feedback. Such changes are usually setbacks, not just for the designer, but for developers and other downstream teammates who depend on the mockups.

Let’s take a look at how DLS + Auto Layout can speed up your design process:

Design Language Systems

How many times have you had to take out a component, detach it, resize it, and then measure it to make sure it fits the grid? For those that don’t have a DLS at all, the process is even more tedious. So, instead of doing either, we simply click on the text ‘label’ and type in the new label, ‘addresses’. Thanks to Auto Layout, the component resizes to fit the longer text.

Design Language System

What if we want to change the state to make it look like ‘addresses’ has been selected? Well, with a set of prebuilt components, we can easily swap the unhighlighted, default navigation bar with the highlighted version. Note that even with swapping the components, the Auto Layout ensures that the text, padding, position, styling etc. are kept intact.

The next four GIFs show the true time-saving potential and how Strive partners with our clients to provide value.

Design Language Systems

Design Language Systems

Design Language Systems

Design Language Systems

If you work with data-heavy applications, you’re used to the extraneous work that revolves around creating or updating tables. Generally, this means resizing, measuring, and aligning every column, just to update a single table! Not to mention dealing with tables that are particularly long and take hours of your time. Instead, Strive has developed a component that allows designers to simply bring out a table component, edit a column to fit the styling using switches, and then copy and paste to create a full table. In some of our even more advanced tables, we have developed columns where placeholders like currency, dates, etc. have been prefilled.

It took us less than a minute to create… Can your DLS do that?

Interested in learning more?

Here at Strive, we take pride in our Technology Enablement practice, where we can assist you in your UX/UI needs. Our subject matter experts team up with you to understand your core business needs, while taking a deeper dive into your organization’s growth strategy. Click to learn more about our digital experience capabilities and how we can help.

Authored by Strive Technology Enablement Practice

Getting Change Management Right

Strategic initiative failure rates remain high, but working with the right partner can yield success. Knowing the stakes, executives put tremendous resources into planning and implementing their transformative projects.

However, seasoned executives also know that the success of those projects rests on getting users to adapt to new technologies, new processes, and new ways of working as much as, if not more than, any other element of the endeavor.

Unfortunately, successful change remains elusive. The failure rate for all change initiatives has been stuck around 70% for the past two decades and remains there today.[1]

Consider figures from Gartner, the tech advisory firm: Its research shows that only 34% of all organizational change initiatives are a clear success, while half are out-and-out failures.[2]

Those figures tell only part of the story, though. Here at Strive Consulting, we’ve found that companies without internal Change Management teams generally experience even higher failure rates. Why? Because they have neither the deep knowledge, nor the experience and tools, to enable change.

As a result, these companies often use online tutorials that offer only highlights on the topic, or they rely on overly complex white papers that don’t provide guidance on tailoring a program to the organization’s own unique needs.

Neither option delivers information on the concrete tools and techniques needed to effectively teach people how to work in new and different ways. Rather, they tend to focus on the psychology – how the end user feels about the changes – and share some generic guiding principles, such as the ‘importance of communication’.

In reality, Change Management is a specialized skill, and it is one that needs to be expertly adapted to each initiative and tailored to every organization to ensure success. Strive’s Change Management framework acknowledges that reality and brings together four critical elements that must be addressed for an organization to successfully navigate transformation.

Those four elements are:

  • Alignment and Engagement
  • Change Impact and Analytics
  • Communication
  • Readiness and Training

Our extensive experience in helping a broad range of clients steer their companies through change has allowed us to home in on these key areas and build a Change Management framework that leverages each of them to maximum effect. We’ll focus on five critical tools across three elements of our framework.

Let’s look at the first element: Alignment and Engagement. This element ensures that we’re collaborating with the right people in the plan and that their goals and priorities are well understood. With our ‘Story for Change’ we ask five important questions: What is happening? Why now? So what? How are we going to achieve this? And now what? Asking these questions and listening to responses from project leaders gives us and, more importantly, the organization a clear, precise understanding of where it wants to be at the end of the transformation. While collaborating with these same leaders, we group and assess different stakeholder cohorts on a 2×2 grid measuring each cohort’s level of influence on the initiative’s success against the degree of impact imposed on it. The Stakeholder Assessment is the backbone of tailoring change, considering that all cohorts are coming from very different starting points and have different roles within the broader future state.
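As an illustration, that 2×2 assessment can be sketched in a few lines of code. The cohort names, scores, and quadrant labels below are hypothetical examples, not Strive’s actual rubric:

```python
# Toy sketch of a stakeholder assessment on a 2x2 grid.
# Scores use an illustrative 0-10 scale; 5 splits the quadrants.

def assess(influence: float, impact: float, threshold: float = 5.0) -> str:
    """Place a cohort in a quadrant based on influence vs. impact imposed."""
    if influence >= threshold and impact >= threshold:
        return "Key players: engage closely"
    if influence >= threshold:
        return "Keep satisfied"
    if impact >= threshold:
        return "Keep informed and supported"
    return "Monitor"

cohorts = {
    "Executive sponsors": (9, 3),
    "Front-line operations": (3, 9),
    "Project leads": (8, 8),
}

for name, (influence, impact) in cohorts.items():
    print(f"{name}: {assess(influence, impact)}")
```

Even a simple grid like this makes visible which cohorts need the closest engagement and which mostly need support.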

Next, we look at Change Impact and Analytics. For this, Strive evaluates how someone’s responsibilities will change and by how much. With Change Analysis we document all unique impacts and map them against the stakeholder cohorts, identifying whether groups will perceive each impact as positive, negative, or neutral. This lets us understand how users will feel about the changes they’re facing and develop the various engagement, communication, and training activities needed to build understanding, knowledge, and commitment. We also develop metrics that track adoption, so we can confirm success and identify those cohorts who may need additional support.
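A simple sketch of what that impact mapping might look like (the impacts, cohorts, and sentiments here are purely illustrative):

```python
# Illustrative change impact analysis: map each documented impact
# to a stakeholder cohort and a perceived sentiment, then summarize
# per cohort to spot groups that may need extra support.
from collections import defaultdict

impacts = [
    ("New approval workflow", "Finance", "negative"),
    ("Automated reporting", "Finance", "positive"),
    ("Single sign-on rollout", "All staff", "positive"),
    ("Retired legacy system", "Operations", "neutral"),
]

summary = defaultdict(lambda: {"positive": 0, "negative": 0, "neutral": 0})
for _impact, cohort, sentiment in impacts:
    summary[cohort][sentiment] += 1

# Cohorts with any negative impacts may need additional support.
needs_support = [c for c, s in summary.items() if s["negative"] > 0]
print(needs_support)  # ['Finance']
```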

In tandem, we plan the necessary Communications. This is all about informing key stakeholders through integrated, targeted, and timely program messaging. It’s also about understanding how communication flows within an organization. We believe there must be a communication cascade strategy within any program undergoing change for it to successfully transform. So, top-level sponsors need to effectively communicate with their direct reports, and in turn those managers need to effectively convey messages to their teams. Moreover, a communication plan complements this cascade of information for each audience. Communication timed appropriately, focused on the right message, and delivered via the right vehicle helps all parties understand the importance of transformation for the organization as a whole.

On top of all this, we evaluate Readiness and Training. While training is hyper-focused and can be niche, we’ll focus here on readiness. Quantitative metrics showing before-and-after results tell a clear part of the story, but they are one-sided. Qualitative surveying helps leadership understand whether, and to what extent, stakeholder cohorts and users understand why the change is taking place, are aware of the impacts to their day-to-day responsibilities, know where to go for resources, and believe the change is positive overall.

Now, none of these four framework elements works in isolation. Rather, we consider them all together. In fact, we factor them into the lifecycle of a broader Change Management approach, creating a timeline from start to go-live that includes markers along the way. This means planning, for example, what milestones should be achieved counting down from 90, 60, 30, 15, 7, and 1 day out.
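Those countdown markers translate naturally into a milestone schedule. A small sketch (the go-live date below is hypothetical):

```python
# Derive calendar dates for milestone markers at 90, 60, 30, 15, 7,
# and 1 day(s) before a go-live date.
from datetime import date, timedelta

def countdown(go_live: date, markers=(90, 60, 30, 15, 7, 1)):
    """Map each marker (days before go-live) to its calendar date."""
    return {days: go_live - timedelta(days=days) for days in markers}

for days, when in countdown(date(2024, 6, 1)).items():
    print(f"T-{days}: {when.isoformat()}")
```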

The payoff for having a structured Change Management workstream is significant, and it demonstrates both the value of a solid Change Management strategy and the importance of having a partner who can deliver such results.

Looking for sample deliverables? Or maybe a bit more information? Let’s Talk!  

Here at Strive, we take pride in our Management Consulting practice, where we can assist you in your initial digital product development needs, all the way through to completion. Our subject matter experts team up with you to understand your core business needs, while taking a deeper dive into your company’s growth strategy.

Have Your Data and Query It Too!

“Have your cake and eat it too.” How would it make sense to have cake and not be able to eat it? And yet, we have, for decades, had similar experiences with enterprise data warehouses. We have our data; we want to query it too!

Organizations spend so much time, effort, and resources building a single source of truth. Millions of dollars are spent on hardware and software, and then there is the cleansing, collating, aggregating, and applying of business rules to the data. When it comes time to query… we pull data out of the enterprise data warehouse and put it into data marts. There is simply never enough power to serve everybody who wants to query the data.

With the Snowflake Data Cloud, companies of all sizes can store their data in one place – and every department, every team, every individual can query that data. No more need for the time, expense, effort, and delay to move data out of an enterprise data warehouse and into data marts.

The advent of the ‘data lake’ promised a place where all enterprise data could be stored. Structured, semi-structured, and unstructured data could be stored together, cost effectively. And yet, as so many soon found out, data still ended up needing to be moved out to achieve the desired query performance. More data marts, more cost, and more delay in getting to business insight.

Snowflake solved this problem by separating data storage from compute. Departments and teams can each have their own virtual warehouse, a separate query compute engine that can be sized appropriately for each use case. These query engines do not interfere with each other. Your data science team can run massive, complex queries without impacting the accounting team’s dashboards.
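A toy model (not actual Snowflake code) makes the idea concrete: one shared storage layer, many independently sized compute engines reading from it without contending for each other’s resources:

```python
# Toy illustration of separating storage from compute. The class and
# size labels are hypothetical stand-ins for the architectural idea.

class VirtualWarehouse:
    def __init__(self, name: str, size: str):
        self.name = name  # team that owns this engine
        self.size = size  # compute sized per use case, e.g. "XS" or "4XL"

    def query(self, storage: dict, table: str):
        # Each warehouse reads shared storage with its own compute;
        # there is no contention with other warehouses.
        return storage[table]

storage = {"sales": [100, 200, 300]}  # single source of truth
data_science = VirtualWarehouse("data_science", "4XL")
accounting = VirtualWarehouse("accounting", "XS")

# Both teams query the same data independently, no data marts needed.
print(data_science.query(storage, "sales") == accounting.query(storage, "sales"))  # True
```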

Snowflake does this by having been designed for the cloud from the ground up. A massively parallel processing database, Snowflake was built to use the cloud infrastructure and services of AWS, with support for Azure and GCP following quickly. Organizations get all the scalability promised by Hadoop-based ‘Big Data’ in an easy-to-use, ANSI-standard SQL data warehouse that delivers the 5 V’s of big data (Volume, Value, Variety, Velocity, and Veracity). Not to mention, all of these benefits come with industry-leading cost and value propositions.

Speaking of Variety… Snowflake has broken out of the “data warehouse” box and become ‘The Data Cloud’. All your data types: structured, semi-structured, and now unstructured. All your workloads: Data Warehouse, Data Engineering, Data Science, Data Lake, Data Applications, and Data Marketplace. You have scalability in data volume and in query compute engines across all types of data and use cases.

With the Snowflake Data Cloud, you truly can have all your data and query it too. Extracting business value for all departments and all employees along the way.


Want to learn more about the Snowflake Data Cloud? 

Strive Consulting is a business and technology consulting firm, and proud partner of Snowflake, having direct experience helping our clients understand and maximize the benefits the Snowflake Data Platform presents. Our team of experts can work hand-in-hand with you to determine if leveraging Snowflake is right for your organization. Check out Strive’s additional Snowflake thought leadership HERE.

About Snowflake

Snowflake delivers the Data Cloud – a global network where thousands of organizations mobilize data with near-unlimited scale, concurrency, and performance. Inside the Data Cloud, organizations unite their siloed data, easily discover and securely share governed data, and execute diverse analytic workloads. Join the Data Cloud at SNOWFLAKE.COM.

Contact Us

Why Choose Open-Source Technologies?

In 2022, almost every enterprise has some cloud footprint, especially around its data. These cloud platforms offer closed-source tools which, while offering many benefits, may not be the best choice for every organization. First and foremost, these proprietary services can be expensive. In addition to paying for the storage and compute needed to store and access data, you also end up paying for the software itself. You could also become locked into a multi-year contract, or you might find yourself locked into a cloud’s tech stack. Once that happens, it’s very difficult (and expensive) to migrate to a different technology or re-tool your tech stack. To put it simply, if you ever reach a roadblock your closed-source tool can’t solve, there may be no workarounds.

Since closed-source technologies can create a whole host of issues, open-source technologies may be the right choice. Open-source tech is not owned by any single vendor; anyone can use, repackage, and distribute it. Several companies have monetized open-source technology by packaging and distributing it in innovative ways. Databricks, for example, built a platform on Apache Spark, a big-data processing framework. In addition to providing Spark as a managed service, Databricks offers many other features that organizations find valuable. However, a small organization might not have the capital or the use case that a managed service like Databricks aims to solve. Instead, you can deploy Apache Spark on your own server or a cloud compute instance and have total control. This is especially attractive when addressing security concerns: an organization can benefit from a tool like Spark without having to involve a third party and risk exposing data to it.

Another benefit is fine-tuning resource provisioning.

Because you’re deploying the code on your own server or compute instance, you can configure the specifications however you want. That way, you can avoid over-provisioning or under-provisioning. You can even manage scaling, failover, redundancy, security, and more. While many managed platforms offer auto-scaling and failover, they are rarely as granular as what you get when you provision resources yourself.

Many proprietary tools, specifically ETL (Extract, Transform, Load) and data integration tools, are no-code, GUI-based solutions that require some prior experience to implement correctly. While the GUIs are intended to make it easier for analysts and less-technical people to create data solutions, more technical engineers can find them frustrating. Unfortunately, as the market becomes more inundated with new tools, it can be difficult to find proper training and resources. Even documentation can be iffy! Open-source technologies can be similarly peculiar, but it’s entirely possible to create an entire data stack – data engineering, modeling, analytics, and more – using popular open-source tech. These tools will almost certainly lack a no-code GUI but are compatible with your favorite programming languages. Spark supports Scala, Python, Java, SQL, and R, so anyone who knows one of those languages can be effective with Spark.

But how does this work with cloud environments?

You can choose how much of the open-source stack you want to incorporate. A fully open-source stack would simply be running all your open-source data components on cloud compute instances: database, data lake, ETL, data warehouse, and analytics all on virtual machine(s). However, that’s quite a bit of infrastructure to set up, so it may make sense to offload some parts to cloud-native technologies. Instead of creating and maintaining your own data lake, it would make sense to use AWS S3, Azure Data Lake Storage Gen2, or Google Cloud Storage. Instead of managing a compute instance for a database, it would make sense to use AWS RDS, Azure SQL DB, or Google Cloud SQL with an open-source flavor of database like MySQL or MariaDB. Instead of managing a Spark cluster, it might make sense to let the cloud manage the scaling, software patching, and other maintenance, and use AWS EMR, Azure HDInsight, or Google Dataproc. You could also abandon the idea of using compute instances entirely and architect a solution using a cloud’s managed open-source offerings: AWS EMR, AWS MWAA, AWS RDS, Azure Database, Azure HDInsight, GCP’s Dataproc and Cloud Composer, and those are just the data-specific services. As mentioned before, these native services bear some responsibility for maintaining the compute/storage, software versions, runtimes, and failover. As a result, a managed offering will be more expensive than doing it yourself, but you’re still not paying software licensing costs.

In the end, there’s a tradeoff.

There’s a tradeoff between having total control and ease of use, maintenance, and cost optimization, but there is a myriad of options for building an open-source data stack. You have the flexibility to host it on-premises or in the cloud of your choice. Most importantly, you can reduce spend significantly by avoiding software licensing costs.


Interested in Learning More About Open-Source Technologies? 

Here at Strive Consulting, our subject matter experts team up with you to understand your core business needs, while taking a deeper dive into your organization’s growth strategy. Whether you’re interested in modern data integration or an overall data and analytics assessment, Strive Consulting is dedicated to being your partner, committed to success. Learn more about our Data & Analytics practice HERE.

Contact Us

5 Key Concepts in Design Thinking for Visual Analytics

Design Thinking for visual analytics is a proven framework that puts the needs of end users at the forefront of development, enabling organizations to fail-fast, iterate, and design analytical solutions that can scale with excellence and sustain with improvement. Design thinking enables organizations to introduce agile ways of working, data fluency, value creation, and storytelling with data.

Some key concepts involved in Design Thinking:

  • Visualization & KPIs
  • Personas
  • User Journey Mappings
  • Conceptual Data Modeling
  • Wire-framing/Prototyping

Visualization & KPIs  

Visual analytics are essential for enabling users to take action and make data-driven decisions through insights and storytelling. Although visualization is at the forefront of most reporting products, there is a broad spectrum of needs and analytics use cases across any business, all of which are important. Visualizations are great, but a visualization is only effective if there is clear alignment on KPIs and how they can be leveraged. Developing KPIs is both an art and a science. The objective is to identify measures that can meaningfully communicate accomplishment of key goals and drive desired behaviors. Every KPI should relate to a specific business outcome with a performance measure. Unfortunately, KPIs are often confused with business metrics. Although often used in the same spirit, KPIs need to be defined according to critical or core business objectives, while metrics can provide insights that may feed into KPIs. KPIs and metrics can be at the organizational level and trickle down to other functional areas, as seen in the example below with Sales. Therefore, defining personas is a good exercise to understand the different needs of users across an organization.

Visualization & KPIs
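To make the KPI-versus-metric distinction concrete, here is a hypothetical sketch in which raw sales metrics feed a KPI with an explicit target (all names and numbers are illustrative):

```python
# Metrics are raw measures; a KPI ties them to a business objective
# with a target. Hypothetical sales example:
metrics = {"qualified_leads": 400, "closed_deals": 52}

# KPI: lead-to-close conversion rate against a quarterly target.
conversion_rate = metrics["closed_deals"] / metrics["qualified_leads"]
target = 0.10

print(f"Conversion rate: {conversion_rate:.1%} (target {target:.0%})")
print("On track" if conversion_rate >= target else "Needs attention")
```

The metrics alone describe activity; only the target and its link to a business outcome make the conversion rate a KPI.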

Persona Development 

What is a Persona?

A persona is a fictional character, rooted in research, that represents the needs and interests of your customer. It is created to represent a segment of users that might leverage a product in a similar way. Personas facilitate visualization of the user and create empathy with users throughout the design process.

Why do we want Personas?

Developing personas helps us understand how different users within an organization leverage analytics. This is integral to designing user-centric applications organized around users’ needs, with relevant content combined from different sources.

What is a good persona?

A good persona accurately represents the user experience for a single user or group of users, taking into consideration their needs in order to formulate requirements, design context, and the complexity of situational behaviors.

By developing personas and understanding the needs of users, we can leverage different approaches to design analytics to guide the user through their desired experience. Whether it’s creating actionable KPIs to measure performance/progress, or enabling the user to self-serve through a guided analytics experience, understanding their analytical needs will help drive the design of the solution.

User Journey 

One of the underlying principles of design thinking is putting the user’s needs first when designing and developing applications. Empathizing with the user by mapping moments of frustration and delight throughout their analytical journey will help formulate the best experience possible.

Conceptual Data Model  

A conceptual data model is a visual representation of concepts and rules that convey the makeup of a system or organization. Conceptual data models are key for defining, organizing, and prioritizing concepts and informational needs of the business.


Wire-framing/Prototyping

Humans are innately visual creatures and often struggle to articulate their needs. Wireframes and prototypes are visual representations that define the experience with reporting and analytics and visually depict the requirements or needs of a user in preparation for development.

What is it good for?

  • Makes Things Tangible: Helps with visualizing the concept and engaging stakeholders around a product vision.
  • Enables Collaboration: Customer/user feedback can be taken into consideration before development begins.
  • Saves Time: Increases the speed of shared understanding and provides guidance to the development team.
  • Supports User Testing: Supports usability test iterations to get insight from actual users of the product.

What is the process of wireframing?

A good iterative design process increases fidelity at each step to get closer to the final product that satisfies the needs of a user.

Helpful Tips for wireframing:

  • You don’t have to be an artist.
  • Keep it simple – sketches and wireframes are meant to convey information.
  • Short, sharp annotations – drawings and sketches help articulate ideas, but annotations, explanations, and callouts are necessary to explain functionality and concepts.
  • Encourage feedback – feedback is necessary to iterate, refine, and improve the design and to engage stakeholders around the product vision.


Design thinking is a framework that can be applied to almost every user-centric application. The biggest value an organization can recognize by instilling design thinking principles is understanding the needs and empathy of users as they begin to adopt analytics to enable a data-driven culture. If you’re curious how design thinking can be applied to your organization’s visual analytics and products, Strive has proven, strategic offerings to help you achieve your desired goals.

Interested in Design Thinking? What about Data & Analytics?

You’re in luck! Strive Consulting helps companies compete in a data-driven world. We turn information into insight through powerful analytics and data engineering, and our Data & Analytics specialists create new opportunities for our clients out of untapped information tucked away across your business. Whether it’s capturing more market share or identifying unmet customer needs, effectively mining and utilizing the data available to you will help you make faster, more informed decisions and keep pace with today’s rapidly changing business world. Click here to learn more about our D&A practice.


Practical Microservices: User Authentication

Unless you’re explicitly in the identity management or security industry – you probably shouldn’t bother building your own user management tools.

We’ve seen a growing interest in microservice architecture from our clients over the past few years. Likewise, we find ourselves recommending a microservice approach for an ever-expanding list of categories. Seizing this shift as an opportunity, vendors have launched a number of products and services that seek to bootstrap development in a microservice context.

The benefits of a microservice approach are myriad. Today, I’d like to talk about one of those benefits that product managers ought to consider – the flexibility to let third-party services fill in your requirements. We’ll examine a typical development roadmap for user management support and compare that to using a service such as Okta or Auth0 to fill those needs.

Common Requirements

Let’s say you’re the product owner tasked with planning the roadmap for a new product. As usual, a user management feature is among the epics. For a fairly straightforward user management workflow, your team might be given base requirements approximately along these lines.

  • Safely store and manage a list of users.
  • Ability to manage, edit, and administer that user list.
  • Access control and/or user roles.
  • Secure method for storing password data.
  • Reliable method of generating new passwords.
  • Enforcement of password complexity rules.
  • Credential reset workflow via email / SMS / push notifications.
  • Regulatory compliance (HIPAA, KYC)
  • Login forms / Authentication workflows
  • SSO not included (budget)
  • 2FA not included (budget)
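To get a feel for the build-it-yourself effort, even a single requirement like password complexity enforcement needs carefully considered rules. A minimal sketch, with purely illustrative rules:

```python
import re

def check_complexity(password: str) -> list[str]:
    """Return a list of failed rules (empty means the password passes)."""
    failures = []
    if len(password) < 12:
        failures.append("at least 12 characters")
    if not re.search(r"[A-Z]", password):
        failures.append("an uppercase letter")
    if not re.search(r"[a-z]", password):
        failures.append("a lowercase letter")
    if not re.search(r"\d", password):
        failures.append("a digit")
    if not re.search(r"[^\w\s]", password):
        failures.append("a symbol")
    return failures

print(check_complexity("correct horse battery staple"))  # missing uppercase, digit, symbol
print(check_complexity("Tr0ub4dor&3-extra-long"))        # []
```

And that’s just one bullet point; every requirement on the list above carries similar hidden depth.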

Some Shaky Deliverables

An experienced scrum master or product owner may already be feeling uneasy. Here we have a set of base requirements that – depending on team size and capability – can eat up large portions of your timeline. What may have felt like a simple ask becomes weeks or months of work. If all goes well, you are left with these burdens:

  • Security and maintenance overhead for all of the above.
  • Potential civil and legal liability for all of the above.
  • Reputational and fiduciary risks from all of the above.
  • Limited ability to evaluate actuarial risk from the above.
  • Inconsistent user experience. 
  • Lower conversion rates as a result.
  • Increased developer costs.

Oh, and don’t forget that users need to learn the nuance of your system… and that’s a great way to shed users before they ever become active.

Tweet from @_jayphelps: "It should be illegal to prevent pasting into a password input field."
Don’t set yourself up for this kind of user feedback.

A Practical Alternative

Unless you’re explicitly in the identity management or security industry – or your application architecture prohibits it – you probably shouldn’t be building your own user management tools. As I mentioned at the top, it might be better to forgo all of this and instead implement a low-code solution from a third party.

Enter Auth0

In March of 2021, Okta announced they would acquire Auth0 for $6.5 billion – expanding their suite of enterprise identity tools. At its core, Auth0 calls itself “a user authentication and authorization platform”. They provide an extensive set of functionality that allows you to integrate with hundreds of identity and access management providers. What’s more, Auth0 offers an excellent set of SDKs and developer APIs which make integrating all these features into your app dead simple.
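Under the hood, platforms like Auth0 issue JSON Web Tokens (JWTs). Here is a minimal sketch of what’s inside one; signature verification is deliberately omitted, and real applications should use a vetted library such as PyJWT rather than hand-rolled parsing. The claims below are hypothetical:

```python
import base64
import json

def b64url_encode(obj: dict) -> str:
    """Base64url-encode a JSON object, stripping padding (JWT style)."""
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def decode_payload(token: str) -> dict:
    # A JWT is header.payload.signature; decode the payload only.
    # NOTE: no signature check here; this is illustration, not auth.
    payload_b64 = token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Build a toy token with hypothetical claims.
header = b64url_encode({"alg": "RS256", "typ": "JWT"})
payload = b64url_encode({"sub": "user-123", "aud": "my-api"})
token = f"{header}.{payload}.fake-signature"

print(decode_payload(token))  # {'sub': 'user-123', 'aud': 'my-api'}
```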

“Simple documentation about how to integrate – 15-minutes and we were up and running with a proof of concept.”

It’s true, I can testify from personal experience. When I first tried working with Auth0 – I was able to establish a new account, skim the documentation, integrate with Strive’s internal directory and implement full user support in a React application without breaking a sweat. It was just as straightforward to do this in iOS with Swift and in Spring Boot. In just an afternoon I was able to put together prototypes on all three of these platforms.

Just think about that for a moment. There’s a version of this timeline where an entire scrum team takes multiple sprints to implement user management for a single platform. With Auth0 it took about 45 minutes!

Building a Business Case

At this point, you may already have a clear picture of the business case for using a provider like Auth0 rather than rolling your own user management. We understand that it can be difficult to gain support from fellow stakeholders for even the strongest ideas. With that in mind, here are a few bullet points you can steal:

  • Faster time to market
  • Reduced cost of development
  • Reduced complexity of development
  • Reduced fixed operating overhead
  • Reduced likelihood of defects / liability therein
  • Increased user registration / conversion

Wrapping Up

Ultimately this is about creating as much value as possible from limited resources. Engineering work is expensive, and developers are a finite resource. Since software companies often live and die by the efficacy of their developers, it’s critical that you allocate their efforts effectively.

This advice is based on decades of combined experience serving startups, the Fortune 500, and everyone in between. Generally, Strive believes that user authentication is a solved problem. Rather than dedicate resources to re-solve it, we find that it’s often wisest to instead stand on the shoulders of giants.



My Strive Story: Caryn Conde

Strive Consulting wouldn’t be the company it is today without the inspiring group of employees who make up our workforce. We pride ourselves on hiring individuals who bring a diverse set of skills and backgrounds to the table, which sets our organization up for success.

Through our ‘My Strive Story’ interview series, we sit down with members of different teams within the organization to find out where they came from, why they chose Strive, and what makes their experience here special.

Tell us about yourself.

Let me first tell you that I am a Senior Consultant in Strive’s Management Consulting practice, based in Dallas, Texas. I’ve been in consulting for almost a decade, with two of those years spent at Strive. Now, on to the important things… I was born in San Diego, California, went to school in Arizona, and have called Dallas home since 2012. I’m a travel lover, taco worshiper, and I dare you to find someone who loves their dog as much as I love mine. I spend most of my time playing tourist with my pup and, since Covid, trying out the various streaming apps (admittedly for hours at a time) while curating my Japanese whisky collection.

What attracted you to Strive?

Most consultants are drawn to this field with a promise of travel, dining on a corporate budget, and the variety of work. Coming out of college, I couldn’t think of a better line of work.  

That being said, I have worked as a consultant for many years (maybe not as many as some) and while I still love what I do, my priorities have shifted. I began looking for a company that fit with my values now: spending more time at home, relationships with people, and connectivity to my local community.  

When Strive and I began our courtship, I was drawn to three things in particular: the local delivery model, company culture, and the overall opportunity available. The local delivery model meant I could work and live in the same city, and as I begin to build my family, this has become more of a priority for me. The company culture was present in every interview and employee I spoke with. Every employee was excited, driven, and genuine, from HR rep to CEO. Strive empowers their employees to voice opinions, share thoughts, and to own and drive change. As Strive continues to grow, the opportunity is endless.  

What keeps you at Strive?

Strive has not lost sight of the things that drew me to it those few years ago, even with the growth they have had. With other firms I have been a part of, this is not always the case. You tend to feel lost in the crowd of larger companies. Strive’s leadership continues to prioritize their employees and maintain the culture they built when they first began. They are connected to their employees in a way you don’t see in other firms. Want to start a community outreach program? Let’s talk about it. Want to learn about a specific technology? Let’s talk about it. Want to make changes to an existing process because you think there may be a better way? Let’s talk about it. They listen to their employees, pivot when something isn’t working, and actively work to provide their employees with a platform to reach their potential.

What makes Strive stand above the rest?

I have been drinking the “Strive Kool-Aid” for a couple of years now, so I may be a bit biased. However, I truly believe Strive has figured out a way to make a company feel more like a family. I work with a great group of people, who are authentic, positive, and passionate every day. A big part of enjoying the work you do, in the office or on the client site, is enjoying the people you get to work and build with. This shines through when we engage with our clients and continue to build lasting relationships with them. Strive has carefully cultivated a work environment that I am proud to be a part of.

What are you looking forward to in the future?

With the Covid clouds parting, I am excited for Strive’s growth, my own professional growth, and our market growth. As the Dallas office grows, and consultants are added to the practice, I am excited to see how they will come in and help shape Strive, making their own mark in the company. It’s an exciting time at Strive and an awesome thing to be a part of. 

Interested in joining Strive?

Here at Strive Consulting, we foster an active, innovative culture, providing the coaching, mentoring, and support our employees need to work at the top of their game and succeed personally and professionally. Check out our Careers page for open roles and opportunities within Strive. We’re hiring!

How Snowflake Saved My Life

Let me tell you a story of how the Snowflake Data Cloud saved my life. I know it sounds dramatic, but just hear me out.

A few years ago, I worked with a multi-billion-dollar wholesale distributor that had never implemented a data warehouse. Their main goal? Consolidate all data into one location and enable KPI roll-ups from across their disparate systems. However, in this case, they did not want to invest in additional licensing. So, my team set about building a traditional data warehouse leveraging their current platform, SQL Server. Initially, it was a successful four-layer architecture with Staging, Consolidation, Dimensional Model, and Tabular Cubes, with the end visualization solution being Power BI… but within a few months, issues began to surface.

The number of sources feeding into this platform increased dramatically, and this increase started to impact load times. Initially, the batch load processes ran between two and three hours, but over time they stretched to five, six, sometimes seven hours! We needed a long-term solution, but in the short term, we had to keep the platform running to deliver data to the organization.

What we were experiencing were challenges with constraints, indexing, locks, fragmentation, and more. To mitigate these issues, I personally took the step of waking up every morning at 3:00AM to log in and ensure certain process milestones had completed successfully and on time. If those milestones were not achieved, the batch process would either stall, fail, or run excessively long, and the last thing I wanted was to explain to the business why they were not going to have data until 9:00, 10:00, 11:00AM. After a couple weeks of doing this, it became apparent – we needed a better solution, and fast!

In the past, I had some experience with Big Data platforms, but decided to research options outside of established technologies, such as Cloudera or Hadoop-based solutions, and instead looked into something new – Snowflake. Snowflake is a cloud data platform where organizations can access, share, and maintain their data at scale, so I thought, why not? Let’s give it a shot!

We set up a proof of concept, initially trying to mimic the four-layer architecture we had built in SQL Server. After seeing limited success, as well as being laughed at for even trying it, we took a step back, reevaluated our approach, and flipped the architecture from ‘Extract Transform Load’ to ‘Extract Load Transform’. And… Eureka! With this change, we were able to reduce overnight batch runtimes from the five, six, seven hours on SQL Server to less than 20 minutes. In fact, our average runtimes for our load processes were around 17 minutes, but now I’m just showing off.

Not only did this have an incredible effect on our ability to deliver data in a timely manner, but it also increased the frequency with which we processed data. You see, with SQL Server we were never able to update data more than once a day, but with Snowflake, we could run the batch process every 20 minutes and quickly deliver requested changes to the models, measures, and dimensions.

The implementation process went from taking weeks to taking days, or even hours, resulting in some very happy stakeholders. With these results, coupled with the fact that I no longer had to wake up at 3:00AM to verify successful batch processes…Snowflake truly saved my life.

Want to learn more about the Snowflake Data Cloud? 

Strive Consulting is a business and technology consulting firm, and proud partner of Snowflake, having direct experience helping our clients understand and maximize the benefits the Snowflake Data Platform presents. Our team of experts can work hand-in-hand with you to determine if leveraging Snowflake is right for your organization. Check out Strive’s additional Snowflake thought leadership here.

About Snowflake

Snowflake delivers the Data Cloud – a global network where thousands of organizations mobilize data with near-unlimited scale, concurrency, and performance. Inside the Data Cloud, organizations unite their siloed data, easily discover and securely share governed data, and execute diverse analytic workloads. Join the Data Cloud at


My Strive Story: Charles Cabel

Strive Consulting wouldn’t be the company it is without the inspiring group of employees who make up our workforce. We pride ourselves on hiring individuals who bring a diverse set of skills and backgrounds to the table, which sets our organization up for success.

Throughout our ‘My Strive Story’ Interview platform, we sit down with members from our different teams within the organization to find out where they came from, why they chose Strive, and what makes their experience special.

Tell us about yourself.

First and foremost, to my fellow Veterans, we answered the call and served our country honorably. Every day, but more so on November 11th, I’ll hold my head up high, knowing that I’m part of such an esteemed group of individuals.

In addition, to my Army brothers and sisters from the recently decommissioned 67th Signal Battalion, thank you for being my battle buddies. Our accomplishments were great; the camaraderie was even greater.

Born in Chicago and raised in the Northwest Suburbs, I’m about as Chicagoan as the next guy. I prefer to call buildings by their original names (i.e., Comiskey, Sears Tower, and the Rosemont Horizon). Give me a Chicago-style hotdog, but hold the sport peppers, or hit me with a Maxwell Street Polish any day of the week. If you want something truly Chicago, go with the giardiniera – no other city does it better.

I’ve been a consultant since 2008 and have partnered with multiple organizations, across many industries, implementing IT solutions. I enjoy the challenge of bringing my clients’ visions to fruition. More than that, I enjoy establishing a team dynamic between my clients and my colleagues.

What attracted you to Strive?

Before joining Strive, I was on the road for a significant amount of time. Strive’s local engagement model allows me to be with my family at home, instead of calling in via FaceTime from a hotel room. The quality of projects, diverse client base, and dynamic pace of work never falters and is still a huge part of who Strive is, but with the added benefit of staying out of airport check-in lines.

The size of Strive was also a huge factor in me joining. While Strive is growing at a rapid rate, we still have a close, tight-knit employee base, which is seen in the structure of our internal teams. I personally prefer smaller teams, where I can get to know my colleagues and have the opportunity to work together on challenging engagements – which is exactly how Strive operates.

What keeps you at Strive?

The challenging engagements. Each engagement has presented me with an opportunity for career growth and professional development. Whether it is a new role, new responsibilities, or new technologies, I’ve been able to have unique and fulfilling experiences, all while enjoying the people on my teams, both internally and externally.

What makes Strive stand above the rest?

Our ability to respond to change. We’re at a size that allows us to react quickly to changes and pivot accordingly, ensuring our decisions are correct for our clients and colleagues. This applies to the engagements we work on and how we work and interact.

What are you looking forward to in the future?

We’re a part of an interesting time; the post-pandemic landscape continues to change how we operate. I’m looking forward to seeing the response to the ever-changing “work” dynamic. Whatever the world brings, I do know that at Strive, we will successfully adapt and overcome.

Interested in joining Strive?

Here at Strive Consulting, we foster an active, innovative culture, providing the coaching, mentoring, and support our employees need to work at the top of their game and succeed personally and professionally. Check out our Careers page for open roles and opportunities within Strive. We’re hiring and we’re dedicated to being your partner, committed to success.

Snowflake: A Data Engineering Tool

Snowflake began as a best-in-class cloud storage tool. It set out to simply store data, while also providing the near-infinite scalability of the cloud. Right out of the gate it offered incredible flexibility with its decoupled storage and compute. You could model your data in a Kimball dimensional model, a Data Vault, or even a loosely structured data lake all in the same instance. It also handled table clustering, auto scaling, security, caching, and many other features most data engineers and analysts don’t want to worry about. However, it fell short when it came to data extraction and transformation. Other platforms like Redshift, BigQuery, and Synapse integrated so well with their respective cloud data stacks, making data engineering and processing as simple as it gets. Snowflake, on the other hand, has three options for loading and transforming data.

Option 1: Snowpipe and Stages

Snowflake has a proprietary continuous data loading solution called Snowpipe. It uses named Snowflake objects called ‘pipes’, which contain a COPY statement to take data from a stage and load it into a table. A stage is a location where data files are stored in the cloud. These can be internal (within Snowflake itself) or external (on a cloud platform like AWS S3 or Azure Data Lake Storage). Snowpipe can be called by a public REST API endpoint or by using a cloud event messaging service like AWS SNS.

For example:

  1. Log files are uploaded to an AWS S3 bucket
  2. AWS SNS or S3 Event Notifications notify Snowpipe new data is available
  3. Snowpipe grabs the new data and loads it into the target table
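Sketched in Snowflake SQL, that flow might look like the following. The bucket path, storage integration, and table names here are placeholders rather than real objects:

```sql
-- External stage pointing at the S3 bucket that receives the log files
CREATE STAGE log_stage
  URL = 's3://my-log-bucket/logs/'
  STORAGE_INTEGRATION = my_s3_integration
  FILE_FORMAT = (TYPE = JSON);

-- Pipe that auto-ingests whenever S3 event notifications
-- (delivered through SNS/SQS) signal that new files have landed
CREATE PIPE log_pipe
  AUTO_INGEST = TRUE
AS
  COPY INTO raw_logs
  FROM @log_stage;
```

With AUTO_INGEST enabled, the bucket’s event notifications drive the pipe directly; no scheduler or manual COPY statement is needed.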

There are numerous benefits to this approach:

  • It’s serverless, so you don’t need to configure or monitor resources.
  • It’s continuous, so it’s a good approach for constantly flowing data or small incremental loads.
  • You’re billed only for the compute Snowflake uses to move the data, as opposed to keeping a virtual warehouse running.
  • If you already have a cloud data lake, this is a simple way to migrate data into Snowflake using native technology.

However, Snowpipe may not be appropriate for the following reasons:

  • Snowpipe alone doesn’t solve Change-Data-Capture (CDC), which requires the use of streams
  • Snowpipe requires additional cloud resources and configuration:
    • A cloud data store
    • An event messaging service
    • IAM roles and policies
    • Snowflake file formats and copy options
  • Integration between your Snowflake instance and your cloud account is required.
  • File sizing, queuing behavior, and load latency need tuning and monitoring.

Option 2: Third-Party ETL Tools

From the beginning, Snowflake created a robust partner network and integrations with some of the most popular ETL tools in the market. These ETL tools have built native integrations with Snowflake, simplifying the process of extracting, transforming, and loading data from various sources into Snowflake. While each ETL tool is different, they should all be able to offer the following:

  • One single tool for extraction, transformation, and loading of data
  • Native CDC capabilities
  • Orchestration and automation of ETL/ELT processes
  • Less setup than using Snowpipe and external stages
  • No-code solutions for data transformations

There are many use cases where a third-party ETL tool is the right choice, and when implemented successfully, it can save time and money for data engineering teams. There are also reasons not to use third-party tools:

  • Price for proprietary software can be very expensive
  • Some tools are not fully managed, meaning your team will have to set up the cloud infrastructure to serve the ETL tool
  • Potentially less scalability than Snowflake
  • Difficult to switch from one tool to another
  • Each tool requires additional skills in order to implement effectively
  • Continuous, near-real-time, and real-time loading is immature or nonexistent

Other Options

There are other options for loading data into Snowflake:

  1. BULK loading via COPY command
  2. Data loading wizard in the Snowflake UI

The data loading wizard allows you to load data directly into tables without writing any code. It is intended for small datasets and rare, one-off circumstances, and should not be used for regular data loading. BULK loading via the COPY command allows you to manually load data by staging it and then copying it into a table; this requires a running virtual warehouse. It, too, should not be used for regularly scheduled loads or for any kind of volatile or business-critical data.
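As a rough sketch of the manual path, assuming a local CSV file and placeholder object names, a bulk load via COPY looks something like this (the PUT step is issued from the SnowSQL CLI, not the web UI):

```sql
-- Named internal stage to hold the uploaded file
CREATE STAGE IF NOT EXISTS bulk_stage;

-- Upload and compress the local file (run from SnowSQL)
PUT file:///data/orders_2021.csv @bulk_stage;

-- Copy the staged file into the target table; requires a running warehouse
COPY INTO orders
FROM @bulk_stage/orders_2021.csv.gz
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);
```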

The Current State of Data Engineering in Snowflake

While Snowflake started off as a cloud storage and querying platform, competition has forced it to create more data engineering capabilities. Platforms like Databricks are starting to encroach on data storage, and Snowflake has responded by adding new features:

Stored Procedures

Stored procedures allow you to execute procedural logic. Procedures can take parameters and execute tasks like DML actions. As of October 2021, stored procedures can only be written in JavaScript.
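A minimal JavaScript procedure might look like the sketch below; the procedure and table names are hypothetical:

```sql
CREATE OR REPLACE PROCEDURE purge_old_rows(DAYS FLOAT)
  RETURNS STRING
  LANGUAGE JAVASCRIPT
AS
$$
  // Arguments are exposed to JavaScript as upper-case variables (DAYS)
  var stmt = snowflake.createStatement({
    sqlText: "DELETE FROM event_log WHERE event_ts < DATEADD(day, ?, CURRENT_TIMESTAMP())",
    binds: [-DAYS]
  });
  stmt.execute();
  return "Purged rows older than " + DAYS + " days";
$$;
```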


Tasks

Tasks are named objects that execute a SQL statement at a scheduled time. The emphasis is on the scheduled aspect, as this is the first semblance of orchestration in Snowflake. A task will use a virtual warehouse to run a SQL statement, such as calling a stored procedure. In other words, you can combine tasks and stored procedures to create data pipelines!

Because tasks are named database objects, you can use them in third party ETL solutions as you would a table or view. Certain limitations may apply depending on the tool.
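Assuming a warehouse named etl_wh and a hypothetical stored procedure refresh_dim_customer, a scheduled task might be declared like this:

```sql
-- Run the procedure every night at 2 AM UTC on a named warehouse
CREATE TASK nightly_refresh
  WAREHOUSE = etl_wh
  SCHEDULE = 'USING CRON 0 2 * * * UTC'
AS
  CALL refresh_dim_customer();

-- Tasks are created in a suspended state; resume to start the schedule
ALTER TASK nightly_refresh RESUME;
```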


Streams

These are the last piece of the puzzle for creating a data engineering environment. Streams are named objects created off an existing table. A stream will have the same columns as the table, but with three additional columns: METADATA$ACTION, METADATA$ISUPDATE, and METADATA$ROW_ID.

  • METADATA$ACTION specifies if the row is an insert or delete
  • METADATA$ISUPDATE specifies if the insert or delete is an update
  • METADATA$ROW_ID specifies a unique and immutable ID for each row in the table

With these three columns, you can implement Change-Data-Capture on tables. This article shows how to build a relatively simple Type II slowly changing dimension and all the upsert logic associated with it.
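For example, creating a stream on a hypothetical customers table and inspecting its pending changes:

```sql
-- Stream that records row-level changes made to the table
CREATE STREAM customer_changes ON TABLE customers;

-- The stream exposes the table's columns plus the three metadata columns;
-- querying it shows all changes since the stream was last consumed
SELECT customer_id,
       METADATA$ACTION,
       METADATA$ISUPDATE,
       METADATA$ROW_ID
FROM customer_changes;
```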

Streams can provide an easier alternative to stored procedures for processing Change-Data-Capture. In a traditional Kimball architecture, this would be best used in the ODS layer and for Type II slowly changing dimensions in the presentation layer. Instead of having to program a stored procedure in Javascript, you can combine a stream to provide the table history and a view to handle the insert/update/delete logic. However, a stream only applies to a single table. Streams are not appropriate for ETL processes that require complex joins and aggregations like you would typically use in a staging layer for fact and dimension tables.

Streams and stored procedures still need to be used together for CDC, but the process is simplified. Instead of comparing the ODS table with the new raw data, the stream determines the table changes for you. The stream and corresponding view that manages the upsert logic are used in a simple stored procedure to expire changed and deleted records, insert new and current changed records, and append metadata to all records.

Bringing it all together

With these native Snowflake objects, data processing no longer requires third-party tools. Achieving a robust persistent staging layer with auditability is now possible with streams, and when chained with tasks and called by stored procedures, the entire process can be automated. Pipes can continuously bring new data into Snowflake, so your data will never be stale. The data orchestration and processing components that previously lived in a third-party tool now reside entirely within Snowflake, and the code is entirely SQL.
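One sketch of such a pure-SQL pipeline is a task that fires only when a stream has captured changes. The warehouse, stream, and table names below are placeholders, and the MERGE is a simplified Type I update rather than the full Type II pattern described above:

```sql
CREATE TASK process_customer_changes
  WAREHOUSE = etl_wh
  SCHEDULE = '20 MINUTE'
  -- Skip the run entirely if the stream has nothing new
  WHEN SYSTEM$STREAM_HAS_DATA('CUSTOMER_CHANGES')
AS
  MERGE INTO dim_customer d
  USING customer_changes s
    ON d.customer_id = s.customer_id
  WHEN MATCHED AND s.METADATA$ACTION = 'DELETE' AND NOT s.METADATA$ISUPDATE
    THEN DELETE
  WHEN MATCHED AND s.METADATA$ACTION = 'INSERT'
    THEN UPDATE SET d.name = s.name, d.updated_at = CURRENT_TIMESTAMP()
  WHEN NOT MATCHED AND s.METADATA$ACTION = 'INSERT'
    THEN INSERT (customer_id, name, updated_at)
         VALUES (s.customer_id, s.name, CURRENT_TIMESTAMP());
```

Consuming the stream inside the task's DML also advances the stream's offset, so the next run sees only changes made after this one.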

Interested in Learning More about Snowflake as a Data Engineering Tool?

Strive Consulting is a business and technology consulting firm, and proud partner of Snowflake, having direct experience with query usage and helping our clients understand and maximize the benefits the Snowflake Data Platform presents. Our team of experts can work hand-in-hand with you to determine if leveraging Snowflake is right for your organization. Check out Strive’s additional Snowflake thought leadership here.

About Snowflake

Snowflake delivers the Data Cloud – a global network where thousands of organizations mobilize data with near-unlimited scale, concurrency, and performance. Inside the Data Cloud, organizations unite their siloed data, easily discover and securely share governed data, and execute diverse analytic workloads. Join the Data Cloud at