Panic hits when you mistakenly delete data. The problem might be a mistake that disrupts a single process or, worse, an entire database that was deleted. Thoughts of how recent the last backup was and how much time will be lost might have you wishing for a rewind button. With Snowflake's Time Travel, straightening out your database isn't a disaster-recovery exercise. A few SQL commands allow you to go back in time and reclaim the past, saving you from the time and stress of a more extensive restore.

We'll get started in the Snowflake web console, configure data retention, and use Time Travel to retrieve historic data. Before querying for your previous database states, let's review the prerequisites for this guide.

Prerequisites

  • Quick Video Introduction to Snowflake
  • Snowflake Data Loading Basics Video

What You'll Learn

  • How to set up a Snowflake account and user permissions
  • How to create database objects
  • How to set data retention timelines for Time Travel
  • How to query Time Travel data
  • How to clone past database states
  • How to remove database objects
  • Next options for data protection

What You'll Need

  • A Snowflake Account

What You'll Build

  • Database objects with Time Travel data retention configured

First things first, let's get your Snowflake account and user permissions primed to use Time Travel features.

Create a Snowflake Account

Snowflake lets you try out their services for free with a trial account. A Standard account allows for one day of Time Travel data retention, and an Enterprise account allows for up to 90 days of data retention. An Enterprise account is necessary to practice some commands in this tutorial.

Login and Setup Lab

Log into your Snowflake account. You can access the SQL commands we will execute throughout this lab directly in your Snowflake account by setting up your environment below:

Setup Lab Environment

This will create worksheets containing the lab SQL that can be executed as we step through this lab.


Once the lab has been set up, it can be continued by revisiting the lab details page and clicking Continue Lab,


or by navigating to Worksheets and selecting the Getting Started with Time Travel folder.


Increase Your Account Permission

Snowflake's web interface has a lot to offer, but for now, switch the account role from the default SYSADMIN to ACCOUNTADMIN. You'll need this increase in permissions later.


Now that you have the account and user permissions needed, let's create the required database objects to test drive Time Travel.

Within the Snowflake web console, navigate to Worksheets and use the 'Getting Started with Time Travel' worksheets we created earlier.

Create Database

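The SQL from the original screenshot isn't reproduced here; a minimal equivalent, using the database name from this guide, is:

  CREATE DATABASE timeTravel_db;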

Use the above command to make a database called 'timeTravel_db'. The Results output will show a status message of Database TIMETRAVEL_DB successfully created.

Create Table

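A minimal equivalent of the screenshot's command; the column definitions are illustrative, since the original columns aren't shown in the text, and PUBLIC is assumed as the default schema:

  CREATE TABLE timeTravel_db.public.timeTravel_table (id INT, name STRING);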

This command creates a table named 'timeTravel_table' in the timeTravel_db database. The Results output should show a status message of Table TIMETRAVEL_TABLE successfully created.

With the Snowflake account and database ready, let's get down to business by configuring Time Travel.

Be ready for anything by setting up data retention beforehand. The default setting is one day of data retention. However, if the one-day mark passes and you need a previous database state back, you can't retroactively extend the data retention period. This section teaches you how to be prepared by preconfiguring Time Travel retention.

Alter Table

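A minimal equivalent of the screenshot's command, using the table created earlier in this guide:

  ALTER TABLE timeTravel_db.public.timeTravel_table SET DATA_RETENTION_TIME_IN_DAYS = 55;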

The command above changes the table's data retention period to 55 days. If you opted for a Standard account, your data retention period is limited to the default of one day. An Enterprise account allows for up to 90 days of preservation in Time Travel.

Now that you know how easy it is to alter your data retention, let's bend the rules of time by querying an old database state with Time Travel.

With your data retention period specified, let's turn back the clock with the AT and BEFORE clauses.

Use TIMESTAMP to summon the database state at a specific date and time, as in the example below.
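For example (the timestamp value is a placeholder; the table is the one created earlier in this guide):

  SELECT * FROM timeTravel_table AT (TIMESTAMP => '2024-05-01 12:00:00'::TIMESTAMP_LTZ);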

Employ OFFSET to call up the database state at a time difference from the current time. Calculate the offset in seconds with math expressions; for example, -60*5 translates to five minutes ago, as in the query below.
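For example:

  SELECT * FROM timeTravel_table AT (OFFSET => -60*5);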

If you're looking to restore a database state just before a transaction occurred, grab the transaction's statement ID. Use a BEFORE (STATEMENT => ...) clause with your statement ID, as shown below, to get the database state right before the transaction statement was executed.
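For example (the query ID is a placeholder for a real statement ID from your query history):

  SELECT * FROM timeTravel_table BEFORE (STATEMENT => '<query_id>');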

By practicing these queries, you'll be confident in how to find a previous database state. After locating the desired database state, you'll need to get a copy by cloning in the next step.

With the past at your fingertips, make a copy of the old database state you need with the clone keyword.

Clone Table

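A minimal equivalent of the screenshot's command:

  CREATE TABLE restoredTimeTravel_table CLONE timeTravel_table AT (OFFSET => -60*5);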

The command above creates a new table named restoredTimeTravel_table that is an exact copy of the table timeTravel_table from five minutes prior.

Cloning will allow you to maintain the current database while getting a copy of a past database state. After practicing the steps in this guide, remove the practice database objects in the next section.

You've created a Snowflake account, made database objects, configured data retention, queried old table states, and generated a copy of an old table state. Pat yourself on the back! Complete the tutorial by deleting the objects you created.

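Drop Table

A minimal equivalent of the command shown in the original screenshot, using the table from this guide:

  DROP TABLE timeTravel_db.public.timeTravel_table;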

By dropping the table before the database, the retention period previously specified on the object is honored. If a parent object (e.g., a database) is removed without the child object (e.g., a table) being dropped first, the child's explicitly set data retention period is not honored; the child is retained for the same period as the parent.

Drop Database

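A minimal equivalent of the screenshot's command:

  DROP DATABASE timeTravel_db;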

With the database now removed, you've completed learning how to call, copy, and erase the past.

Snowflake Time Travel: The Ultimate Guide to Understand, Use & Get Started 101

By: Harsh Varshney Published: January 13, 2022


To empower your business decisions with data, you need Real-Time, High-Quality data from all of your data sources in a central repository. Traditional On-Premise Data Warehouse solutions have limited Scalability and Performance, and they require constant maintenance. Snowflake is a more Cost-Effective and Instantly Scalable solution with industry-leading Query Performance. It’s a one-stop-shop for Cloud Data Warehousing and Analytics, with full SQL support for Data Analysis and Transformations. One of the highlighting features of Snowflake is Snowflake Time Travel.


Snowflake Time Travel allows you to access Historical Data (that is, data that has been updated or removed) at any point in time. It is an effective tool for doing the following tasks:

  • Restoring Data-Related Objects (Tables, Schemas, and Databases) that may have been removed by accident or on purpose.
  • Duplicating and Backing up Data from previous periods of time.
  • Analyzing Data Manipulation and Consumption over a set period of time.

In this article, you will learn everything about Snowflake Time Travel, along with the steps you might want to carry out while using it, illustrated with simple SQL code to make the process run smoothly.

What is Snowflake?

Snowflake is the world’s first Cloud Data Warehouse solution, built on the customer’s preferred Cloud Provider’s infrastructure (AWS, Azure, or GCP). Snowflake (SnowSQL) adheres to the ANSI Standard and includes typical Analytics and Windowing Capabilities. There are some differences in Snowflake’s syntax compared to other SQL dialects, but also many parallels.

Snowflake’s integrated development environment (IDE) is totally Web-based. Visit XXXXXXXX.us-east-1.snowflakecomputing.com. After logging in, you’ll be sent to the primary Online GUI, which works as an IDE, where you can begin interacting with your Data Assets. Each query tab in the Snowflake interface is referred to as a “Worksheet” for simplicity. These “Worksheets,” like the tab history function, are automatically saved and can be viewed at any time.

Key Features of Snowflake

  • Query Optimization: By using Clustering and Partitioning, Snowflake may optimize a query on its own. With Snowflake, Query Optimization isn’t something to be concerned about.
  • Secure Data Sharing: Data can be exchanged securely from one account to another using Snowflake Database Tables, Views, and UDFs.
  • Support for File Formats: JSON, Avro, ORC, Parquet, and XML are all Semi-Structured data formats that Snowflake can import. It has a VARIANT column type that lets you store Semi-Structured data.
  • Caching: Snowflake has a caching strategy that allows the results of the same query to be quickly returned from the cache when the query is repeated. Snowflake persists query results (for a period of time) to avoid regenerating them when nothing has changed.
  • SQL and Standard Support: Snowflake offers both standard and extended SQL support, as well as Advanced SQL features such as Merge, Lateral View, Statistical Functions, and many others.
  • Fault Resistant: Snowflake provides exceptional fault-tolerant capabilities to recover the Snowflake object in the event of a failure (tables, views, database, schema, and so on).

To get further information, check out the official Snowflake website.

What is Snowflake Time Travel Feature?


Snowflake Time Travel is an interesting tool that allows you to access data from any point in the past. For example, if you have an Employee table, and you inadvertently delete it, you can utilize Time Travel to go back 5 minutes and retrieve the data. Snowflake Time Travel allows you to Access Historical Data (that is, data that has been updated or removed) at any point in time. It is an effective tool for doing the following tasks:

  • Query Data that has been changed or deleted in the past.
  • Make clones of complete Tables, Schemas, and Databases at or before certain dates.
  • Restore Tables, Schemas, and Databases that have been deleted.

As the ability of businesses to collect data explodes, data teams have a crucial role to play in fueling data-driven decisions. Yet, they struggle to consolidate the data scattered across sources into their warehouse to build a single source of truth. Broken pipelines, data quality issues, bugs and errors, and lack of control and visibility over the data flow make data integration a nightmare.

1000+ data teams rely on Hevo’s Data Pipeline Platform to integrate data from over 150+ sources in a matter of minutes. Billions of data events from sources as varied as SaaS apps, Databases, File Storage and Streaming sources can be replicated in near real-time with Hevo’s fault-tolerant architecture. What’s more – Hevo puts complete control in the hands of data teams with intuitive dashboards for pipeline monitoring, auto-schema management, custom ingestion/loading schedules. 

All of this combined with transparent pricing and 24×7 support makes us the most loved data pipeline software on review sites.

Take our 14-day free trial to experience a better way to manage data pipelines.

How to Enable & Disable Snowflake Time Travel Feature? 

1) Enable Snowflake Time Travel

To enable Snowflake Time Travel, no chores are necessary. It is turned on by default, with a one-day retention period . However, if you want to configure Longer Data Retention Periods of up to 90 days for Databases, Schemas, and Tables, you’ll need to upgrade to Snowflake Enterprise Edition. Please keep in mind that lengthier Data Retention necessitates more storage, which will be reflected in your monthly Storage Fees. See Storage Costs for Time Travel and Fail-safe for further information on storage fees.

For Snowflake Time Travel, the example below builds a table with 90 days of retention.
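A sketch of such a statement (the table name and column are illustrative):

  CREATE TABLE my_table (id INT) DATA_RETENTION_TIME_IN_DAYS = 90;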

To shorten the retention term for a certain table, the below query can be used.
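For example, assuming the table above:

  ALTER TABLE my_table SET DATA_RETENTION_TIME_IN_DAYS = 1;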

2) Disable Snowflake Time Travel

Snowflake Time Travel cannot be turned off for an account, but it can be turned off for individual Databases, Schemas, and Tables by setting the object’s DATA_RETENTION_TIME_IN_DAYS to 0.

Users with the ACCOUNTADMIN role can also set DATA_RETENTION_TIME_IN_DAYS to 0 at the account level, which means that by default, all Databases (and, by extension, all Schemas and Tables) created in the account have no retention period. However, this default can be overridden at any time for any Database, Schema, or Table.

3) What are Data Retention Periods?

Data Retention Time is an important part of Snowflake Time Travel. When data in a table is modified, such as when data is deleted or an object containing data is dropped, Snowflake preserves the state of the data before the change. The Data Retention Period sets the number of days that this historical data will be stored, allowing Time Travel operations (SELECT, CREATE … CLONE, UNDROP) to be performed on it.

All Snowflake Accounts have a standard retention duration of one day (24 hours) , which is automatically enabled:

  • In Snowflake Standard Edition, the Retention Period can be set to 0 (or reset to the default of 1 day) at the account and object level (i.e. for Databases, Schemas, and Tables).
  • For transient Databases, Schemas, and Tables, the Retention Period can be set to 0 (or reset to the default of 1 day). The same applies to Temporary Tables.
  • The Retention Time for permanent Databases, Schemas, and Tables can be configured to any number between 0 and 90 days .

4) What are Snowflake Time Travel SQL Extensions?

The following SQL extensions have been added to facilitate Snowflake Time Travel:

  • The AT | BEFORE clause, which can be specified with a TIMESTAMP, an OFFSET (time difference in seconds from the present time), or a STATEMENT (identifier for a statement, e.g. a query ID).
  • The UNDROP command for Tables, Schemas, and Databases.


For How Many Days Does Snowflake Time Travel Retain Data?

How to Specify a Custom Data Retention Period for Snowflake Time Travel

In Standard Edition, the Retention Time is limited to 1 day (i.e. one 24-hour period). In Snowflake Enterprise Edition (and higher), the default for your account can be set to any value up to 90 days:

  • The account default can be overridden using the DATA_RETENTION_TIME_IN_DAYS parameter in the command when creating a Table, Schema, or Database.
  • If a Database or Schema has a Retention Period , that duration is inherited by default for all objects created in the Database/Schema.

The Data Retention Time can be set in the way it has been set in the example below. 
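A sketch, with an illustrative schema name (the account-level statement requires the ACCOUNTADMIN role, and values above 1 require Enterprise Edition):

  CREATE SCHEMA my_schema DATA_RETENTION_TIME_IN_DAYS = 30;
  ALTER ACCOUNT SET DATA_RETENTION_TIME_IN_DAYS = 30;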

Using manual scripts and custom code to move data into the warehouse is cumbersome. Frequent breakages, pipeline errors and lack of data flow monitoring makes scaling such a system a nightmare. Hevo’s reliable data pipeline platform enables you to set up zero-code and zero-maintenance data pipelines that just work.

  • Reliability at Scale : With Hevo, you get a world-class fault-tolerant architecture that scales with zero data loss and low latency. 
  • Monitoring and Observability : Monitor pipeline health with intuitive dashboards that reveal every stat of pipeline and data flow. Bring real-time visibility into your ELT with Alerts and Activity Logs  
  • Stay in Total Control : When automation isn’t enough, Hevo offers flexibility – data ingestion modes, ingestion, and load frequency, JSON parsing, destination workbench, custom schema management, and much more – for you to have total control.    
  • Auto-Schema Management : Correcting improper schema after the data is loaded into your warehouse is challenging. Hevo automatically maps source schema with destination warehouse so that you don’t face the pain of schema errors.
  • 24×7 Customer Support : With Hevo you get more than just a platform, you get a partner for your pipelines. Discover peace with round the clock “Live Chat” within the platform. What’s more, you get 24×7 support even during the 14-day full-feature free trial.
  • Transparent Pricing : Say goodbye to complex and hidden pricing models. Hevo’s Transparent Pricing brings complete visibility to your ELT spend. Choose a plan based on your business needs. Stay in control with spend alerts and configurable credit limits for unforeseen spikes in data flow. 

How to Modify Data Retention Period for Snowflake Objects?

When you alter a Table’s Data Retention Period, the new Retention Period affects all active data as well as any data in Time Travel. Whether you lengthen or shorten the period has an impact:

1) Increasing Retention 

This causes the data in Snowflake Time Travel to be saved for a longer amount of time.

For example, if you increase the retention time on a Table from 10 to 20 days, data that would have been removed after 10 days is now kept for an additional 10 days before being moved to Fail-Safe. This does not apply to data that is more than 10 days old and has already been moved to Fail-Safe.

2) Decreasing Retention

  • Decreasing retention reduces the amount of time data is kept in Time Travel.
  • The new shorter Retention Period applies to active data modified after the Retention Period was reduced.
  • If historical data is still inside the new shorter period, it stays in Time Travel.
  • If historical data falls outside the new period, it is moved to Fail-Safe.

For example, if you have a table with a 10-day Retention Period and reduce it to one day, data from days 2 through 10 is moved to Fail-Safe, leaving just data from day 1 accessible through Time Travel.

However, since the data is moved from Snowflake Time Travel to Fail-Safe via a background operation, the change is not immediately obvious. Snowflake ensures that the data will be migrated, but does not say when the process will be completed; the data is still accessible using Time Travel until the background operation is completed.

Use the appropriate ALTER <object> Command to adjust an object’s Retention duration. For example, the below command is used to adjust the Retention duration for a table:
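For example, with an illustrative table name:

  ALTER TABLE my_table SET DATA_RETENTION_TIME_IN_DAYS = 30;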

How to Query Snowflake Time Travel Data?

When you make any DML actions on a table, Snowflake saves prior versions of the Table data for a set amount of time. Using the AT | BEFORE Clause, you can Query previous versions of the data.

This Clause allows you to query data at or immediately before a certain point in the Table’s history throughout the Retention Period. The supplied point can be either time-based (e.g., a Timestamp or a Time Offset from the present) or the Statement ID of a prior query (e.g. a SELECT or an INSERT).

  • The query below selects Historical Data from a Table as of the Date and Time indicated by the Timestamp:
  • The following Query pulls Data from a Table as it existed 5 minutes ago:
  • The following Query collects Historical Data from a Table up to the specified statement’s Modifications, but not including them:
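Sketches of the three queries (the object name, timestamp, and query ID are placeholders):

  SELECT * FROM my_table AT (TIMESTAMP => 'Mon, 01 May 2023 16:20:00 -0700'::TIMESTAMP_TZ);
  SELECT * FROM my_table AT (OFFSET => -60*5);
  SELECT * FROM my_table BEFORE (STATEMENT => '<query_id>');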

How to Clone Historical Data in Snowflake? 

In addition to queries, the AT | BEFORE Clause can be combined with the CLONE keyword in the CREATE command for a Table, Schema, or Database to create a logical duplicate of the object at a specific point in its history.

Consider the following scenario:

  • The CREATE TABLE command below generates a Clone of a Table as of the Date and Time indicated by the Timestamp:
  • The following CREATE SCHEMA command produces a Clone of a Schema and all of its Objects as they were an hour ago:
  • The CREATE DATABASE command produces a Clone of a Database and all of its Objects as they were before the specified statement was completed:
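Sketches of the three CLONE statements (the object names, timestamp, and query ID are placeholders):

  CREATE TABLE restored_table CLONE my_table AT (TIMESTAMP => 'Mon, 01 May 2023 16:20:00 -0700'::TIMESTAMP_TZ);
  CREATE SCHEMA restored_schema CLONE my_schema AT (OFFSET => -3600);
  CREATE DATABASE restored_db CLONE my_db BEFORE (STATEMENT => '<query_id>');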

Using UNDROP Command with Snowflake Time Travel: How to Restore Objects? 

The following commands can be used to restore a dropped object that has not been purged from the system (i.e. the object is still visible in the SHOW <object_type> HISTORY output):

  • UNDROP DATABASE
  • UNDROP TABLE
  • UNDROP SCHEMA

UNDROP returns the object to the state it was in before the DROP command was issued.

A dropped Database can be restored using the UNDROP command. For example:
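With an illustrative database name:

  UNDROP DATABASE my_db;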

Similarly, you can UNDROP Tables and Schemas . 

Snowflake Fail-Safe vs Snowflake Time Travel: What is the Difference?

Fail-Safe ensures that Historical Data is preserved in the event of a System Failure or other Catastrophic Event, such as a Hardware Failure or a Security Incident, while Snowflake Time Travel allows you to access Historical Data (that is, data that has been updated or removed) at any point within the retention period.

Fail-Safe mode allows Snowflake to recover Historical Data for a (non-configurable) 7-day period . This time begins as soon as the Snowflake Time Travel Retention Period expires.

This article has introduced you to the various Snowflake Time Travel features to help you improve your overall decision-making and experience when trying to make the most out of your data. In case you want to export data from a source of your choice into your desired Database/destination like Snowflake, then Hevo is the right choice for you!

However, as a Developer, extracting complex data from a diverse set of data sources like Databases, CRMs, Project Management Tools, Streaming Services, and Marketing Platforms into your Database can seem quite challenging. If you are from a non-technical background or are new to the game of data warehousing and analytics, Hevo can help!

Hevo will automate your data transfer process, hence allowing you to focus on other aspects of your business like Analytics, Customer Management, etc. Hevo provides a wide range of sources – 150+ Data Sources (including 40+ Free Sources) – that connect with over 15+ Destinations. It will provide you with a seamless experience and make your work life much easier.

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand.

You can also have a look at our unbeatable pricing that will help you choose the right plan for your business needs!

Harsh has experience in research analysis and a passion for data, software architecture, and writing technical content. He has written more than 100 articles on data integration and infrastructure.


How to Leverage the Time Travel Feature on Snowflake

Welcome to Time Travel in the Snowflake Data Cloud . You may be tempted to think “only superheroes can Time Travel,” and you would be right. But Snowflake gives you the ability to be your own real-life superhero. 

Have you ever feared deleting the wrong data in your production database? Or that your carefully written script might accidentally remove the wrong records? Never fear, you are here – with Snowflake Time Travel!

What’s The Big Deal?

Snowflake Time Travel, when properly configured, allows for any Snowflake user with the proper permissions to recover and query data that has been changed or deleted up to the last 90 days (though this recovery period is dependent on the Snowflake version, as we’ll see later.) 

This provides comprehensive, robust, and configurable data history in Snowflake that your team doesn’t have to manage! It includes the following advantages:

  • Data (or even entire databases and schemas) can be restored that may have been lost due to a deletion, no matter if that deletion was on purpose or not
  • The ability to maintain backup copies of your data for all past versions of it for a period of time
  • Allowing for inspection of changes made over specific periods of time

To further investigate these features, we will look at:

  • How Time Travel works
  • How to configure Time Travel in your account
  • How to use Time Travel
  • How Time Travel impacts Snowflake cost
  • Some Time Travel best practices

How Time Travel Works

Before we learn how to use it, let’s understand a little more about why Snowflake can offer this feature. 

Snowflake stores the records in each table in immutable objects called micro-partitions that contain a subset of the records in a given table. 

Each time a record is changed (created/updated/deleted), a brand new micro-partition is created, preserving the previous micro-partitions to create an immutable historical record of the data in the table at any given moment in time. 

Time Travel is simply accessing the micro-partitions that were current for the table at a particular moment in time.  

How To Configure Time Travel In Your Account

Time Travel is available and enabled in all account types.

However, the extent to which it is available is dependent on the type of Snowflake account, the object type, and the access granted to your user.  

Default Retention Period

The retention period is the amount of time you can travel back and recover the state of a table at a given point and time. It is variable per account type. The default Time Travel retention period is 1 day (24 hours).

PRO TIP: Snowflake does have an additional layer of data protection called fail-safe , which is only accessible by Snowflake to restore customer data past the time travel window.  However, unlike time travel, it should not be considered as a part of your organization’s backup strategy.

Account/Object Type Considerations

All Snowflake accounts have Time Travel for permanent databases, schemas, and tables enabled for the default retention period.

Snowflake Standard accounts (and above) can remove Time Travel retention altogether by setting the retention period to 0 days, effectively disabling Time Travel. 

Snowflake Enterprise accounts (and above) can set the Time Travel retention period for transient databases, schemas, tables, and temporary tables to either 0 or 1 day. For permanent databases, schemas, and tables, the retention period can be set anywhere from 0 to 90 days.

To summarize: Standard Edition supports a retention period of 0 or 1 day for all object types, while Enterprise Edition (and above) supports 0 to 90 days for permanent objects and 0 or 1 day for transient and temporary objects.

Changing Retention Period

For Snowflake Enterprise accounts, two account-level parameters can be used to change the default account-level retention time:

  • DATA_RETENTION_TIME_IN_DAYS: How many days that Snowflake stores historical data for the purpose of Time Travel.
  • MIN_DATA_RETENTION_TIME_IN_DAYS: How many days at a minimum that Snowflake stores historical data for the purpose of Time Travel.

The parameter DATA_RETENTION_TIME_IN_DAYS can also be used at an object level to override the default retention time for an object and its children. Example: 
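A sketch of both levels (the database name is illustrative; the account-level change requires the ACCOUNTADMIN role):

  ALTER ACCOUNT SET DATA_RETENTION_TIME_IN_DAYS = 30;
  ALTER DATABASE my_db SET DATA_RETENTION_TIME_IN_DAYS = 7;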

How To Use Time Travel

Using Time Travel is easy! There are two sets of SQL commands that can invoke Time Travel capabilities:

  • AT or BEFORE: clauses for both SELECT and CREATE … CLONE statements. AT is inclusive and BEFORE is exclusive.
  • UNDROP: command for restoring a deleted table/schema/database.

The following graphic from the Snowflake documentation summarizes this visually:

A screenshot illustrating the Snowflake Data lifecycle with Time Travel

Query Historical Data

You can query historical data using the AT or BEFORE clauses and one of three parameters:

  • TIMESTAMP: A specific historical timestamp at which to query data from a particular object. Example: SELECT * FROM my_table AT (TIMESTAMP => 'Fri, 01 May 2015 15:00:00 -0700'::TIMESTAMP_TZ);
  • OFFSET: The difference in seconds from the current time at which to query data from a particular object. Example: CREATE SCHEMA restored_schema CLONE my_schema AT (OFFSET => -4800);
  • STATEMENT: The query ID of a statement that is used as a reference point from which to query data from a particular object. Example: CREATE DATABASE restored_db CLONE my_db BEFORE (STATEMENT => '8e5d0ca9-005e-44e6-b858-a8f5b37c5726');

The one thing to understand is that these commands will work only within the retention period for the object that you are querying against. So, if your retention time is set to the default one day, and you try to UNDROP a table two days after deleting it, you will receive an error and be out of luck!


Restore Deleted Objects

You can also restore objects that have been deleted by using the UNDROP command.  To use this command, another table with the same fully qualified name (database.schema.table) cannot exist.  

Example: UNDROP TABLE my_table

How Time Travel Impacts Snowflake Cost

Snowflake accounts are billed for each 24-hour period that Time Travel data (the historical micro-partitions) must be maintained.

Every time there is a change in a table's data, the historical version of that changed data will be retained (and charged for, in addition to the active data) for the entire retention period. This may not amount to an entire second copy of the table: Snowflake will try to maintain only the minimal amount of historical data needed, but additional costs will still be incurred.

As an example, if every row of a 100 GB table were changed ten times a day, the storage consumed (and charged) for this data per day would be 100GB x 10 changes = 1 TB.  

What can you do to optimize cost to ensure your ops team does not wake up to an unnecessarily large Time Travel bill?  Below are a couple of suggestions.

Use Transient and Temporary Tables When Possible

If data does not need to be protected using Time Travel, or is only being used as an intermediate stage in an ETL process, then take advantage of transient and temporary tables with the DATA_RETENTION_TIME_IN_DAYS parameter set to 0. This essentially disables Time Travel and makes sure there are no extra costs because of it.

Copy Large High-Churn Tables

If you have large permanent tables where a high percentage of records are often changed every day, it might be a good idea to change your storage strategy for these tables based on the cost implications mentioned above.  

One way of dealing with such a table would be to create it as a transient table with 0 Time Travel retention (DATA_RETENTION_TIME_IN_DAYS=0) and copy it over to a permanent table on a periodic basis.  
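A sketch of this pattern, with hypothetical table and column names:

  -- high-churn working table with Time Travel disabled
  CREATE TRANSIENT TABLE clickstream_work (event_id INT, payload VARIANT)
    DATA_RETENTION_TIME_IN_DAYS = 0;

  -- periodic snapshot into a permanent table that does carry Time Travel
  CREATE OR REPLACE TABLE clickstream_snapshot AS SELECT * FROM clickstream_work;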

This would allow you to control the number of copies of this data you maintain without worrying about ballooning Time Travel costs in the background. 

Time Travel is an incredibly useful tool that removes the need for your team to maintain backups/snapshots/complex restoration processes/etc… as with a traditional database.  Specifically, it enables the following advantages:

  • Data recovery/restoration : use the ability to query historical data to restore old versions of a particular dataset, or recover databases/schemas/tables that have been deleted
  • Backups : If not explicitly disabled, Time Travel automatically maintains backup copies of all past versions of your data for at least 1 day, and up to 90 days
  • Change Auditing : The queryable nature of time travel allows for inspection of changes made to your data over specific periods of time

Final Thoughts

Hopefully, this has helped you understand how to use Snowflake Time Travel, the context around how it works, and some of the cost implications.

If your organization needs help using or configuring Time Travel, or any other Snowflake feature, phData is a certified Elite Snowflake partner, and we would love to hear from you so that our team can help drive the value of your organization’s data forward!


ThinkETL

Overview of Snowflake Time Travel

Consider a scenario where, instead of dropping a backup table, you accidentally dropped the actual table, or where, instead of updating a set of records, you accidentally updated all the records in the table (because you didn't use a WHERE clause in your UPDATE statement).

What would be your next action after realizing your mistake? You would probably wish you could go back in time to a point before you executed the incorrect statement, so that you could undo your mistake.

Snowflake provides exactly this: the ability to get back to the data as it existed at a particular point in time. This feature in Snowflake is called Time Travel.

Let us understand more about Snowflake Time Travel in this article with examples.

1. What is Snowflake Time Travel?

Snowflake Time Travel enables accessing historical data that has been changed or deleted at any point within a defined period. It is a powerful CDP (Continuous Data Protection) feature which ensures the maintenance and availability of your historical data.

Snowflake Continuous Data Protection Lifecycle

Below actions can be performed using Snowflake Time Travel within a defined period of time:

  • Restore tables, schemas, and databases that have been dropped.
  • Query data in the past that has since been updated or deleted.
  • Create clones of entire tables, schemas, and databases at or before specific points in the past.

Once the defined period of time has elapsed, the data is moved into Snowflake Fail-Safe and these actions can no longer be performed.

2. Restoring Dropped Objects

A dropped object can be restored within the Snowflake Time Travel retention period using the “UNDROP” command.

Consider a table 'Employee' that has been dropped accidentally instead of a backup table.

Dropping Employee table
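In essence, the statement shown in the screenshot is:

  DROP TABLE employee;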

It can be easily restored using the Snowflake UNDROP command as shown below.

Restoring Employee table using UNDROP
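In essence, the restore statement shown in the screenshot is:

  UNDROP TABLE employee;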

Databases and Schemas can also be restored using the UNDROP command.

Calling UNDROP restores the object to its most recent state before the DROP command was issued.

3. Querying Historical Objects

When unwanted DML operations are performed on a table, the Snowflake Time Travel feature enables querying earlier versions of the data using the  AT | BEFORE  clause.

The AT | BEFORE clause is specified in the FROM clause immediately after the table name and it determines the point in the past from which historical data is requested for the object.

Let us understand with an example. Consider the table Employee. The table has a field IS_ACTIVE which indicates whether an employee is currently working in the Organization.

Employee table

The employee 'Michael' has left the organization, and the field IS_ACTIVE needs to be updated to FALSE for his record. But instead, you have updated IS_ACTIVE to FALSE for all the records present in the table.

Updating IS_ACTIVE in Employee table
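In essence, the accidental statement is an UPDATE with no WHERE clause (the column name IS_ACTIVE comes from the example; the intended statement would have included a condition such as WHERE name = 'Michael'):

  UPDATE employee SET is_active = FALSE;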

There are three different ways you could query the historical data using AT | BEFORE Clause.

3.1. OFFSET

"OFFSET" is the time difference in seconds from the present time.

The following query selects historical data from a table as of 5 minutes ago.
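For example, assuming the Employee table above:

  SELECT * FROM employee AT (OFFSET => -60*5);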

Querying historical data using OFFSET

3.2. TIMESTAMP

Use “TIMESTAMP” to get the data at or before a particular date and time.

The following query selects historical data from a table as of the date and time represented by the specified timestamp.
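For example (the timestamp value is a placeholder):

  SELECT * FROM employee AT (TIMESTAMP => '2023-05-01 10:00:00'::TIMESTAMP_LTZ);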

Querying historical data using TIMESTAMP

3.3. STATEMENT

"STATEMENT" is the identifier of a statement, e.g. a query ID.

The following query selects historical data from a table up to, but not including any changes made by the specified statement.
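For example (the query ID is a placeholder for the ID of the UPDATE statement):

  SELECT * FROM employee BEFORE (STATEMENT => '<query_id>');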

Querying historical data using STATEMENT

The query ID used in the statement belongs to the UPDATE statement we executed earlier. The query ID can be obtained from "Open History".

4. Cloning Historical Objects

We have seen how to query the historical data. In addition, the AT | BEFORE clause can be used with the CLONE keyword in the CREATE command to create a logical duplicate of the object at a specified point in the object’s history.

The following queries show how to clone a table using AT | BEFORE clause in three different ways using OFFSET, TIMESTAMP and STATEMENT.
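Sketches of the three variants (the timestamp and query ID are placeholders; the clone names are illustrative):

  CREATE TABLE employee_clone_offset CLONE employee AT (OFFSET => -60*5);
  CREATE TABLE employee_clone_timestamp CLONE employee AT (TIMESTAMP => '2023-05-01 10:00:00'::TIMESTAMP_LTZ);
  CREATE TABLE employee_clone_statement CLONE employee BEFORE (STATEMENT => '<query_id>');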

To restore the data in the table to a historical state, create a clone using AT | BEFORE clause, drop the actual table and rename the cloned table to the actual table name.
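A sketch of that restore sequence, with an illustrative clone name:

  CREATE TABLE employee_restored CLONE employee AT (OFFSET => -60*5);
  DROP TABLE employee;
  ALTER TABLE employee_restored RENAME TO employee;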

5. Data Retention Period

A key component of Snowflake Time Travel is the data retention period.

When data in a table is modified or deleted, or an object containing data is dropped, Snowflake preserves the state of the data before the change. The data retention period specifies the number of days for which this historical data is preserved.

Time Travel operations can be performed on the data during this data retention period of the object. When the retention period ends for an object, the historical data is moved into Snowflake Fail-safe.

6. How to find the Time Travel Data Retention period of Snowflake Objects?

The SHOW PARAMETERS command can be used to find the Time Travel retention period of Snowflake objects.

The below commands can be used to find the data retention period of databases, schemas, and tables.
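For example (the object names are illustrative):

  SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN DATABASE my_db;
  SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN SCHEMA my_db.my_schema;
  SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN TABLE my_db.my_schema.employee;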

The DATA_RETENTION_TIME_IN_DAYS parameter specifies the number of days to retain the old version of deleted/updated data.

The below image shows that the table Employee has the DATA_RETENTION_TIME_IN_DAYS value set as 1.

Query showing Data Retention Period of Employee table

7. How to set custom Time-Travel Data Retention period for Snowflake Objects?

Time travel is automatically enabled with the standard, 1-day retention period. However, you may wish to upgrade to Snowflake Enterprise Edition or higher to enable configuring longer data retention periods of up to 90 days for databases, schemas, and tables.

You can configure the data retention period of a table while creating the table as shown below.
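For example (the column definitions are illustrative):

  CREATE TABLE employee (employee_id INT, name STRING, is_active BOOLEAN)
    DATA_RETENTION_TIME_IN_DAYS = 90;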

To modify the data retention period of an existing table, use the below syntax:
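For example, matching the 30-day value shown below:

  ALTER TABLE employee SET DATA_RETENTION_TIME_IN_DAYS = 30;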

The below image shows that the data retention period of table is altered to 30 days.

Altering Data Retention Period of Employee table

A retention period of 0 days for an object effectively disables Time Travel for the object.

8. Data Retention Period Rules and Inheritance

Changing the retention period for your account or individual objects changes the value for all lower-level objects that do not have a retention period explicitly set. For example:

  • If you change the retention period at the account level, all databases, schemas, and tables that do not have an explicit retention period automatically inherit the new retention period.
  • If you change the retention period at the schema level, all tables in the schema that do not have an explicit retention period inherit the new retention period.

Currently, when a database is dropped, the data retention period for child schemas or tables, if explicitly set to be different from the retention of the database, is not honored. The child schemas or tables are retained for the same period of time as the database.

  • To honor the data retention period for these child objects (schemas or tables), drop them explicitly before you drop the database or schema.


How to manage GDPR compliance with Snowflake’s Time Travel and Disaster Recovery

One year after implementation, the European Union’s General Data Protection Regulation ( GDPR ) continues to be a hot regulatory topic. As organizations work to bring their data practices into compliance with the new law, one question comes up repeatedly: How does Snowflake, the data warehouse built for the cloud, enable my organization to be GDPR compliant?

My answer tends to surprise people. Simply put, compliance is not a function of your database but rather a function of the design you choose . Although Snowflake provides the cloud-based technology and tools that enable compliance, each organization maintains sole responsibility for designing an architecture that is, in fact, GDPR compliant.

With that said, Snowflake offers some powerful features that don’t exist in other databases. Therefore, it behooves database architects to have a working knowledge of Snowflake’s data protection and recovery features when designing their cloud-based data warehouse.

How Time Travel and Fail-Safe work

Snowflake provides continuous data protection (CDP) with two features called Time Travel and Fail-Safe . These unique features eliminate traditional data warehousing challenges (costly backups, time-consuming rollbacks) and enable teams to experiment with their data with confidence, knowing that it will never get lost accidentally.

Time Travel

System administrators can use Time Travel to revert back to any point in the last 24 hours. This feature is useful whenever a mistake is made (for example, table or schema is dropped in production) or a failed release requires a database rollback (for example, a new ETL operation corrupts the data). Through a simple SQL interface, data can be restored based on a  point in time or a query ID, at the database, table, and schema level. By default, Time Travel is always on for Snowflake customers and is set to 24 hours, although enterprise customers have the capability to set Time Travel for any window up to 90 days.

If you accidentally drop a table or database and if the Time Travel window has passed, Snowflake offers a “get out of jail free” card called Fail-Safe. This data recovery feature provides seven days in which you can contact the Snowflake Support Team to bring your data back. A Snowflake administrator must complete this restoration, because the data is inaccessible to an end user. Once the Fail-Safe seven-day window passes, data is removed permanently from Snowflake and the cloud, so it’s important to act quickly.

Best practices for GDPR compliance with CDP

GDPR compliance can be extremely challenging if you don’t have a well-thought-out database architecture, especially for handling the “right to erasure (right to be forgotten)” in GDPR Article 17. Once an individual’s personally identifiable information (PII) is requested, organizations have 30 to 90 days in which to delete the individual’s PII from their database.

Two questions often arise at this point:

  • How do you ensure that an individual’s PII is removed completely and permanently from your database?
  • What do you need to account for in the data architecture, given the automated recovery measures that exist in Snowflake for CDP?

Based on strong data management principles, here are three best practices that alleviate concerns around GDPR compliance while you use Snowflake’s CDP features.

#1: Build a data model that segregates PII data

Arguably the most important data management decision you can make is to build a data model that segregates PII data into a separate table or set of tables. By creating an inventory, you can identify and account for every type of PII data you hold. This best practice is key for adhering to privacy regulations because it makes PII data simpler to find and delete.

The pitfalls of alternative strategies demonstrate why PII data segregation is your strongest option:

  • The risk of losing peripheral data : If PII data is interspersed in a big table with, say, 100 columns, and 20 of those columns are PII data, what happens when you need to delete PII for a single individual? You will likely end up deleting a row from the table that also eliminates 80 columns of non-GDPR-related data that could be valuable for analytical and business purposes.
  • Reliance on costly update operations : You can run an operation that obfuscates all the PII data in a table by scrambling the targeted information and leaving the other data intact. However, that procedure is prone to errors and amounts to a much more expensive methodology than simply deleting data from a separate PII table from the get-go.

#2: Conduct batch deletions and apply Time Travel parameters

Rather than carry out PII deletions as requests come in, borrow a best practice from HIPAA (Health Insurance Portability and Accountability Act) and use batch deletions. By adding a GDPR delete flag and date to your data management process, you can execute a batch process once a month within the 30-day GDPR window.
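As a sketch, with hypothetical table and column names, the monthly batch job might boil down to something like:

  DELETE FROM customer_pii
  WHERE gdpr_delete_flag = TRUE;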

For PII erasure requests, you must consider Time Travel and its setting. For example, GDPR regulations provide 30 days to delete PII (and up to 90 days under extenuating circumstances), which means that in Snowflake’s enterprise version, you should set Time Travel for PII-specific tables to no more than 30 days; otherwise, the data could be inadvertently restored.
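For example, with a hypothetical PII table:

  ALTER TABLE customer_pii SET DATA_RETENTION_TIME_IN_DAYS = 30;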

Conversely, another useful aspect of Time Travel is that if you inadvertently delete the wrong person’s data, you can easily do a point-in-time restore of just those records (if you are still within the Time Travel window). This strategy allows recovery from a mistake without violating GDPR.

#3: Implement tracking

Another best practice is to maintain a table where you track PII erasure requests and PII deletions. This tracking approach also helps you avoid any rollback issues, which is an important safety concern when using Time Travel. For instance, if you happen to restore back to a time before a batch deletion was executed, you’ll know to query the metadata table so you can delete the PII data again.

The same holds true with Fail-Safe, which allows the restoration of all your “lost” or deleted data. As such, you may need to use your list of PII erasure requests to delete those individuals’ PII again. The good news is that Fail-Safe operates within a seven-day period, so you’ll always be within the 90-day GDPR window if you do monthly batch deletions.

At the heart of the EU law is the mandate for organizations to take full responsibility for the data they hold. This regulation is putting a much-needed focus on database architecture and management principles that ultimately makes companies better at safeguarding data.

If you design your database architecture with the most-restrictive privacy policies and regulations in mind, you can avoid heavy refactoring in the future. Today, that means adhering to GDPR and implementing a database design that keeps all your PII ducks in a row while still benefiting from Snowflake’s CDP.


How to query time travel with a time other than the default (UTC)

The time travel feature supports querying data with an AT <timestamp> clause. By default, the timestamp specified in the AT clause is considered as a timestamp with UTC timezone (equivalent to TIMESTAMP_NTZ). 
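If you want the point in time to be interpreted in another timezone, one option is to supply an explicit offset and cast to TIMESTAMP_TZ (the table name and timestamp are placeholders):

  SELECT * FROM my_table AT (TIMESTAMP => '2023-05-01 10:00:00 -0700'::TIMESTAMP_TZ);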


Parameters

Snowflake provides parameters that let you control the behavior of your account, individual user sessions, and objects. All the parameters have default values, which can be set and then overridden at different levels depending on the parameter type (Account, Session, or Object).

Parameter Hierarchy and Types

This section describes the different types of parameters (Account, Session, and Object) and the levels at which each type can be set.

The following diagram illustrates the hierarchical relationship between the different parameter types and how individual parameters can be overridden at each level:

Account Parameters

Account parameters can be set only at the account level by users with the appropriate administrator role. Account parameters are set using the ALTER ACCOUNT command.
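For example, using ALLOW_CLIENT_MFA_CACHING (an account parameter described later in this topic):

  ALTER ACCOUNT SET ALLOW_CLIENT_MFA_CACHING = TRUE;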

Snowflake provides the following account parameters:

By default, account parameters are not displayed in the output of SHOW PARAMETERS . For more information about viewing account parameters, see Viewing the Parameters and Their Values (in this topic).

Session Parameters

Most parameters are session parameters, which can be set at the following levels:

Account administrators can use the ALTER ACCOUNT command to set session parameters for the account. The values set for the account default to individual users and their sessions.

Administrators with the appropriate privileges (typically SECURITYADMIN role) can use the ALTER USER command to override session parameters for individual users. The values set for a user default to any sessions started by the user. In addition, users can override default sessions parameters for themselves using ALTER USER .

Users can use the ALTER SESSION command to explicitly set session parameters within their sessions.
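For example, using the session parameter ABORT_DETACHED_QUERY (described later in this topic) at each level; the user name is illustrative:

  ALTER ACCOUNT SET ABORT_DETACHED_QUERY = TRUE;
  ALTER USER my_user SET ABORT_DETACHED_QUERY = TRUE;
  ALTER SESSION SET ABORT_DETACHED_QUERY = FALSE;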

By default, only session parameters are displayed in the output of SHOW PARAMETERS . For more information about viewing account and object parameters, see Viewing the Parameters and Their Values (in this topic).

Object Parameters

Object parameters can be set at the following levels:

Account administrators can use the ALTER ACCOUNT command to set object parameters for the account. The values set for the account default to the objects created in the account.

Users with the appropriate privileges can use the corresponding CREATE <object> or ALTER <object> commands to override object parameters for an individual object.
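For example, using the object parameter DATA_RETENTION_TIME_IN_DAYS; the table name is illustrative:

  ALTER ACCOUNT SET DATA_RETENTION_TIME_IN_DAYS = 7;
  CREATE TABLE t1 (c1 INT) DATA_RETENTION_TIME_IN_DAYS = 30;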

Snowflake provides the following object parameters:

By default, object parameters are not displayed in the output of SHOW PARAMETERS . For more information about viewing object parameters, see Viewing the Parameters and Their Values (in this topic).

Viewing the Parameters and Their Values

Snowflake provides the SHOW PARAMETERS command, which displays a list of the parameters, along with the current and default values for each parameter. The command can be called with different options to determine the type of parameter displayed.

Viewing Session and Object Parameters

By default, the command displays only session parameters:

SHOW PARAMETERS;

To display the object parameters for a specific object, include an IN clause with the object type and name. For example:

SHOW PARAMETERS IN DATABASE mydb;
SHOW PARAMETERS IN WAREHOUSE mywh;

Viewing All Parameters

To display all parameters, including account and object parameters, include an IN ACCOUNT clause:

SHOW PARAMETERS IN ACCOUNT;

Limiting the List of Parameters by Name

The command also supports using a LIKE clause to limit the list of parameters by name. For example:

To display the session parameters whose names contain “time”:

SHOW PARAMETERS LIKE '%time%';

To display all the parameters whose names start with “time”:

SHOW PARAMETERS LIKE 'time%' IN ACCOUNT;

The LIKE clause must come before the IN clause.

ABORT_DETACHED_QUERY

Session — Can be set for Account » User » Session

Specifies the action that Snowflake performs for in-progress queries if connectivity is lost due to abrupt termination of a session (e.g. network outage, browser termination, service interruption).

TRUE: In-progress queries are aborted 5 minutes after connectivity is lost.

FALSE: In-progress queries are completed.

If the user explicitly closes the connection:

All in-progress synchronous queries are aborted immediately regardless of the parameter value.

When the value is set to FALSE , asynchronous queries continue to run until they complete, until they are canceled, or until the time limit specified for the STATEMENT_TIMEOUT_IN_SECONDS parameter expires. The default for the STATEMENT_TIMEOUT_IN_SECONDS parameter is two days.

Most queries require compute resources to execute. These resources are provided by virtual warehouses, which consume credits while running. With a value of FALSE , if the session terminates, warehouses might continue running and consuming credits to complete any queries that were in progress at the time the session terminated.

ALLOW_CLIENT_MFA_CACHING

Account — Can only be set for Account

Specifies whether an MFA token can be saved in the client-side operating system keystore to promote continuous, secure connectivity without users needing to respond to an MFA prompt at the start of each connection attempt to Snowflake. For details and the list of supported Snowflake-provided clients, see Using MFA token caching to minimize the number of prompts during authentication — optional .

TRUE: Stores an MFA token in the client-side operating system keystore to enable the client application to use the MFA token whenever a new connection is established. While true, users are not prompted to respond to additional MFA prompts.

FALSE: Does not store an MFA token. Users must respond to an MFA prompt whenever the client application establishes a new connection with Snowflake.

ALLOW_ID_TOKEN

Account — Can be set only for Account

Specifies whether a connection token can be saved in the client-side operating system keystore to promote continuous, secure connectivity without users needing to enter login credentials at the start of each connection attempt to Snowflake. For details and the list of supported Snowflake-provided clients, see Using connection caching to minimize the number of prompts for authentication — Optional .

TRUE : Stores a connection token in the client-side operating system keystore to enable the client application to perform browser-based SSO without prompting users to authenticate whenever a new connection is established.

FALSE : Does not store a connection token. Users are prompted to authenticate whenever the client application establishes a new connection with Snowflake. SSO to Snowflake is still possible if this parameter is set to false.

AUTOCOMMIT ¶

Specifies whether autocommit is enabled for the session. Autocommit determines whether a DML statement, when executed without an active transaction, is automatically committed after the statement successfully completes. For more information, see Transactions .

TRUE : Autocommit is enabled.

FALSE : Autocommit is disabled, meaning DML statements must be explicitly committed or rolled back.
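A minimal sketch of working with autocommit disabled; the table name my_table is illustrative:

ALTER SESSION SET AUTOCOMMIT = FALSE;
INSERT INTO my_table VALUES (1);   -- DML is no longer committed automatically
COMMIT;                            -- changes become visible to other sessions only after an explicit COMMIT (or are discarded with ROLLBACK)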

AUTOCOMMIT_API_SUPPORTED (View-only) ¶

For Snowflake internal use only. View-only parameter that indicates whether API support for autocommit is enabled for your account. If the value is TRUE , you can enable or disable autocommit through the APIs for the following drivers/connectors:

JDBC driver

ODBC driver

Snowflake Connector for Python

BINARY_INPUT_FORMAT ¶

String (Constant)

The format of VARCHAR values passed as input to VARCHAR-to-BINARY conversion functions. For more information, see Binary Input and Output .

HEX , BASE64 , or UTF8 / UTF-8

BINARY_OUTPUT_FORMAT ¶

The format for VARCHAR values returned as output by BINARY-to-VARCHAR conversion functions. For more information, see Binary Input and Output .

HEX or BASE64

CATALOG ¶

Object (for databases and schemas) — Can be set for Account » Database » Schema

Specifies a default catalog integration for Iceberg tables. For more information, see the Iceberg table documentation .

Any valid catalog integration identifier.

CLIENT_ENABLE_LOG_INFO_STATEMENT_PARAMETERS ¶

Session — Can be set only for Session

Enables users to log the data values bound to PreparedStatements .

To see the values, you must not only set this session-level parameter to TRUE , but also set the connection parameter named TRACING to either INFO or ALL .

Set TRACING to ALL to see all debugging information and all binding information.

Set TRACING to INFO to see the binding parameter values, with less of the other debugging information.

If you bind confidential information, such as medical diagnoses or passwords, that information is logged. Snowflake recommends making sure that the log file is secure, or only using test data, when you set this parameter to TRUE .

TRUE or FALSE .

CLIENT_ENCRYPTION_KEY_SIZE ¶

Specifies the AES encryption key size, in bits, used by Snowflake to encrypt/decrypt files stored on internal stages (for loading/unloading data) when you use the SNOWFLAKE_FULL encryption type.

This parameter is not used for encrypting/decrypting files stored in external stages (i.e. S3 buckets or Azure containers). Encryption/decryption of these files is accomplished using an external encryption key explicitly specified in the COPY command or in the named external stage referenced in the command.

If you are using the JDBC driver and you wish to set this parameter to 256 (for strong encryption), additional JCE policy files must be installed on each client machine from which data is loaded/unloaded. For more information about installing the required files, see Java requirements for the JDBC Driver .

If you are using the Python connector (or SnowSQL) and you wish to set this parameter to 256 (for strong encryption), no additional installation or configuration tasks are required.

CLIENT_MEMORY_LIMIT ¶

Parameter that specifies the maximum amount of memory the JDBC driver or ODBC driver should use for the result set from queries (in MB).

For the JDBC driver:

To simplify JVM memory management, the parameter sets a global maximum memory usage limit for all queries.

CLIENT_RESULT_CHUNK_SIZE specifies the maximum size of each set (or chunk ) of query results to download (in MB). The driver might require additional memory to process a chunk; if so, it will adjust memory usage during runtime to process at least one thread/query. Verify that CLIENT_MEMORY_LIMIT is set significantly higher than CLIENT_RESULT_CHUNK_SIZE to ensure sufficient memory is available.

For the ODBC driver:

This parameter is supported in version 2.22.0 and higher.

CLIENT_RESULT_CHUNK_SIZE is not supported.

The driver will attempt to honor the parameter value, but will cap usage at 80% of your system memory.

The memory usage limit set in this parameter does not apply to any other JDBC or ODBC driver operations (e.g. connecting to the database, preparing a query, or PUT and GET statements).

Any valid number of megabytes.

1536 (effectively 1.5 GB)

Most users should not need to set this parameter. If this parameter is not set by the user, the driver starts with the default specified above.

In addition, the JDBC driver actively manages its memory conservatively to avoid using up all available memory.

CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX ¶

Session — Can be set for User » Session

For specific ODBC functions and JDBC methods, this parameter can change the default search scope from all databases/schemas to the current database/schema. The narrower search typically returns fewer rows and executes more quickly.

For example, the getTables() JDBC method accepts a database name and schema name as arguments, and returns the names of the tables in the database and schema. If the database and schema arguments are null , then by default, the method searches all databases and all schemas in the account. Setting CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX to TRUE narrows the search to the current database and schema specified by the connection context .

In essence, setting this parameter to TRUE creates the following precedence for database and schema:

1. Values passed as arguments to the functions/methods.

2. Values specified in the connection context (if any).

3. Default (all databases and all schemas).

For more details, see the information below.

This parameter applies to the following:

JDBC driver methods (for the DatabaseMetaData class):

getCrossReference

getExportedKeys

getForeignKeys

getFunctions

getImportedKeys

getPrimaryKeys

ODBC driver functions:

SQLPrimaryKeys

SQLForeignKeys

SQLGetFunctions

SQLProcedures

TRUE : If the database and schema arguments are null , then the driver retrieves metadata for only the database and schema specified by the connection context .

The interaction is described in more detail in the table below.

FALSE : If the database and schema arguments are null , then the driver retrieves metadata for all databases and schemas in the account.

The connection context refers to the current database and schema for the session, which can be set using any of the following options:

Specify the default namespace for the user who connects to Snowflake (and initiates the session). This can be set for the user through the CREATE USER or ALTER USER command, but must be set before the user connects.

Specify the database and schema when connecting to Snowflake through the driver.

Issue a USE DATABASE or USE SCHEMA command within the session.

If the database or schema was specified by more than one of these, then the most recent one applies.

When CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX is set to TRUE :

For the JDBC driver, this behavior applies to version 3.6.27 (and higher). For the ODBC driver, this behavior applies to version 2.12.96 (and higher).

If you want to search only the connection context database, but want to search all schemas within that database, see CLIENT_METADATA_USE_SESSION_DATABASE .

CLIENT_METADATA_USE_SESSION_DATABASE ¶

Session — Can be set for Session

This parameter applies to only the methods affected by CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX .

This parameter applies only when both of the following conditions are met:

CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX is FALSE or unset.

No database or schema is passed to the relevant ODBC function or JDBC method.

For specific ODBC functions and JDBC methods, this parameter can change the default search scope from all databases to the current database. The narrower search typically returns fewer rows and executes more quickly.

TRUE : The driver searches all schemas in the connection context’s database. (For more details about the connection context , see the documentation for CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX .)

FALSE : The driver searches all schemas in all databases.

When the database is null and the schema is null and CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX is FALSE:

CLIENT_METADATA_USE_SESSION_DATABASE = FALSE : All schemas in all databases are searched.

CLIENT_METADATA_USE_SESSION_DATABASE = TRUE : All schemas in the current database are searched.

CLIENT_PREFETCH_THREADS ¶

JDBC, ODBC, Python, .NET

Parameter that specifies the number of threads used by the client to pre-fetch large result sets. The driver will attempt to honor the parameter value, but defines the minimum and maximum values (depending on your system’s resources) to improve performance.

Most users should not need to set this parameter. If this parameter is not set by the user, the driver starts with the default specified above, but also actively manages its thread count conservatively to avoid using up all available memory.

CLIENT_RESULT_CHUNK_SIZE ¶

JDBC, SQL API

Parameter that specifies the maximum size of each set (or chunk ) of query results to download (in MB). The JDBC driver downloads query results in chunks.

Also see CLIENT_MEMORY_LIMIT .

Most users should not need to set this parameter. If this parameter is not set by the user, the driver starts with the default specified above, but also actively manages its memory conservatively to avoid using up all available memory.

CLIENT_RESULT_COLUMN_CASE_INSENSITIVE ¶

Parameter that indicates whether to match column name case-insensitively in ResultSet.get* methods in JDBC.

TRUE : matches column names case-insensitively.

FALSE : matches column names case-sensitively.

CLIENT_SESSION_KEEP_ALIVE ¶

JDBC, ODBC, Python, Node.js

Parameter that indicates whether to force a user to log in again after a period of inactivity in the session.

TRUE : Snowflake keeps the session active indefinitely as long as the connection is active , even if there is no activity from the user.

FALSE : The user must log in again after four hours of inactivity.

Currently, the parameter only takes effect while initiating the session. You can modify the parameter value within the session level by executing an ALTER SESSION command, but it does not affect the session keep-alive functionality, such as extending the session. For information about setting the parameter at the session level, see the client documentation:

CLIENT_SESSION_KEEP_ALIVE_HEARTBEAT_FREQUENCY ¶

SnowSQL, JDBC, Python, Node.js

Number of seconds in-between client attempts to update the token for the session.

900 to 3600

CLIENT_TIMESTAMP_TYPE_MAPPING ¶

Specifies the TIMESTAMP_* variation to use when binding timestamp variables for JDBC or ODBC applications that use the bind API to load data.

TIMESTAMP_LTZ or TIMESTAMP_NTZ

TIMESTAMP_LTZ

DATA_METRIC_SCHEDULE ¶

Object (for tables)

Specifies the schedule to run the data metric functions associated to the table.

The schedule can be based on a defined number of minutes, a cron expression, or a DML event on the table that does not involve reclustering. For details, see:

Data Metric Function Actions (dataMetricFunctionAction) .

Schedule your DMFs to run .

DATA_RETENTION_TIME_IN_DAYS ¶

Object (for databases, schemas, and tables) — Can be set for Account » Database » Schema » Table

Number of days for which Snowflake retains historical data for performing Time Travel actions (SELECT, CLONE, UNDROP) on the object. A value of 0 effectively disables Time Travel for the specified database, schema, or table. For more information, see Understanding & Using Time Travel .

0 or 1 (for Standard Edition )

0 to 90 (for Enterprise Edition or higher )
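For example, retention can be raised on a single table; a 90-day value assumes Enterprise Edition, and the database, schema, and table names below are illustrative:

ALTER TABLE mydb.public.mytable SET DATA_RETENTION_TIME_IN_DAYS = 90;
SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN TABLE mydb.public.mytable;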

DATE_INPUT_FORMAT ¶

Specifies the input format for the DATE data type. For more information, see Date and Time Input and Output Formats .

Any valid, supported date format or AUTO

( AUTO specifies that Snowflake attempts to automatically detect the format of dates stored in the system during the session)

DATE_OUTPUT_FORMAT ¶

Specifies the display format for the DATE data type. For more information, see Date and Time Input and Output Formats .

Any valid, supported date format

DEFAULT_DDL_COLLATION ¶

Sets the default collation used for the following DDL operations:

CREATE TABLE

ALTER TABLE … ADD COLUMN

Setting this parameter forces all subsequently-created columns in the affected objects (table, schema, database, or account) to have the specified collation as the default, unless the collation for the column is explicitly defined in the DDL.

For example, if DEFAULT_DDL_COLLATION = 'en-ci' , then the following two statements are equivalent:
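A sketch of the equivalence (the table and column names are illustrative):

CREATE TABLE t1 (c1 VARCHAR);                    -- with DEFAULT_DDL_COLLATION = 'en-ci', c1 gets the 'en-ci' collation
CREATE TABLE t1 (c1 VARCHAR COLLATE 'en-ci');    -- same result with the collation written out explicitly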

Any valid, supported collation specification .

Empty string

To set the default collation for the account, use the following command:

ALTER ACCOUNT

The default collation for table columns can be set at the table, schema, or database level during creation or any time afterwards:

CREATE TABLE or ALTER TABLE

CREATE SCHEMA or ALTER SCHEMA

CREATE DATABASE or ALTER DATABASE

ENABLE_IDENTIFIER_FIRST_LOGIN ¶

Determines the login flow for users. When enabled, Snowflake prompts users for their username or email address before presenting authentication methods. For details, see Identifier-first login .

TRUE : Snowflake uses an identifier-first login flow to authenticate users.

FALSE : Snowflake presents all possible login options, even if those options don’t apply to a particular user.

ENABLE_INTERNAL_STAGES_PRIVATELINK ¶

Specifies whether the SYSTEM$GET_PRIVATELINK_CONFIG function returns the private-internal-stages key in the query result. The corresponding value in the query result is used during the configuration process for private connectivity to internal stages.

TRUE : Returns the private-internal-stages key and value in the query result.

FALSE : Does not return the private-internal-stages key and value in the query result.

ENABLE_TRI_SECRET_AND_REKEY_OPT_OUT_FOR_IMAGE_REPOSITORY ¶

Specifies the choice for the image repository to opt out of Tri-Secret Secure and Periodic rekeying .

TRUE : Opts out Tri-Secret Secure and Periodic Rekeying for Image Repository.

FALSE : Disallows the creation of an image repository for Tri-Secret Secure and periodic rekeying accounts. Similarly, disallows enabling Tri-Secret Secure and periodic rekeying for accounts that have enabled Image Repository.

ENABLE_TRI_SECRET_AND_REKEY_OPT_OUT_FOR_SPCS_BLOCK_STORAGE ¶

Specifies the choice for the Snowpark Container Services block storage volume to opt out of Tri-Secret Secure and Periodic rekeying .

TRUE : Opts out Tri-Secret Secure and periodic rekeying for Snowpark Container Services block storage volumes.

FALSE : Disallows the creation of block storage volumes for Tri-Secret Secure and periodic rekeying accounts. Similarly, disallows enabling Tri-Secret Secure and periodic rekeying for accounts that have enabled block storage volumes.

ENABLE_UNHANDLED_EXCEPTIONS_REPORTING ¶

Specifies whether Snowflake may capture – in an event table – log messages or trace event data for unhandled exceptions in procedure or UDF handler code. For more information, see Capturing messages from unhandled exceptions .

TRUE : Data about unhandled exceptions is captured as log or trace data if logging and tracing are enabled.

FALSE : Data about unhandled exceptions is not captured.

ENABLE_UNLOAD_PHYSICAL_TYPE_OPTIMIZATION ¶

Specifies whether to set the schema for unloaded Parquet files based on the logical column data types (i.e. the types in the unload SQL query or source table) or on the unloaded column values (i.e. the smallest data types and precision that support the values in the output columns of the unload SQL statement or source table).

TRUE : The schema of unloaded Parquet data files is determined by the column values in the unload SQL query or source table. Snowflake optimizes table columns by setting the smallest precision that accepts all of the values. The unloader follows this pattern when writing values to Parquet files. The data type and precision of an output column are set to the smallest data type and precision that support its values in the unload SQL statement or source table. Accept this setting for better performance and smaller data files.

FALSE : The schema is determined by the logical column data types. Set this value for a consistent output file schema.

ENABLE_UNREDACTED_QUERY_SYNTAX_ERROR ¶

User — Can be set for Account » User

Controls whether query text is redacted if a SQL query fails due to a syntax or parsing error. If FALSE , the content of a failed query is redacted in the views, pages, and functions that provide a query history.

Only users with a role that is granted or inherits the AUDIT privilege can set the ENABLE_UNREDACTED_QUERY_SYNTAX_ERROR parameter.

When using the ALTER USER command to set the parameter to TRUE for a particular user, modify the user that you want to see the query text, not the user who executed the query (if those are different users).

TRUE : Disables the redaction of query text for queries that fail due to a syntax or parsing error.

FALSE : Redacts the contents of a query from the views, pages, and functions that provide a query history when a query fails due to a syntax or parsing error.

ENFORCE_NETWORK_RULES_FOR_INTERNAL_STAGES ¶

Specifies whether a network policy that uses network rules can restrict access to AWS internal stages.

This parameter has no effect on network policies that do not use network rules.

This account-level parameter affects both account-level and user-level network policies.

For details about using network policies and network rules to restrict access to AWS internal stages, including the use of this parameter, see Protecting internal stages on AWS .

TRUE : Allows network policies that use network rules to restrict access to AWS internal stages. The network rule must also use the appropriate MODE and TYPE to restrict access to the internal stage.

FALSE : Network policies never restrict access to internal stages.

ERROR_ON_NONDETERMINISTIC_MERGE ¶

Specifies whether to return an error when the MERGE command is used to update or delete a target row that joins multiple source rows and the system cannot determine the action to perform on the target row.

TRUE : An error is returned that includes values from one of the target rows that caused the error.

FALSE : No error is returned and the merge completes successfully, but the results of the merge are nondeterministic.
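A small sketch of the nondeterministic case, using illustrative temporary tables:

CREATE OR REPLACE TEMPORARY TABLE t_target (id INT, val STRING);
CREATE OR REPLACE TEMPORARY TABLE t_source (id INT, val STRING);
INSERT INTO t_target VALUES (1, 'old');
INSERT INTO t_source VALUES (1, 'a'), (1, 'b');              -- two source rows join the same target row
ALTER SESSION SET ERROR_ON_NONDETERMINISTIC_MERGE = FALSE;
MERGE INTO t_target t USING t_source s ON t.id = s.id
  WHEN MATCHED THEN UPDATE SET t.val = s.val;                -- succeeds, but t.val may end up as either 'a' or 'b'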

ERROR_ON_NONDETERMINISTIC_UPDATE ¶

Specifies whether to return an error when the UPDATE command is used to update a target row that joins multiple source rows and the system cannot determine the action to perform on the target row.

FALSE : No error is returned and the update completes, but the results of the update are nondeterministic.

EVENT_TABLE ¶

Specifies the name of the event table for logging messages from stored procedures and UDFs in this account.

Any existing event table created by executing the CREATE EVENT TABLE command.

EXTERNAL_OAUTH_ADD_PRIVILEGED_ROLES_TO_BLOCKED_LIST ¶

Determines whether the ACCOUNTADMIN, ORGADMIN, and SECURITYADMIN roles can be used as the primary role when creating a Snowflake session based on the access token from the External OAuth authorization server.

TRUE : Adds the ACCOUNTADMIN, ORGADMIN, and SECURITYADMIN roles to the EXTERNAL_OAUTH_BLOCKED_ROLES_LIST property of the External OAuth security integration, which means these roles cannot be used as the primary role when creating a Snowflake session using External OAuth authentication.

FALSE : Removes the ACCOUNTADMIN, ORGADMIN, and SECURITYADMIN from the list of blocked roles defined by the EXTERNAL_OAUTH_BLOCKED_ROLES_LIST property of the External OAuth security integration.

EXTERNAL_VOLUME ¶

Specifies a default external volume for Iceberg tables. For more information, see the Iceberg table documentation .

Any valid external volume identifier.

GEOGRAPHY_OUTPUT_FORMAT ¶

Display format for GEOGRAPHY values .

For EWKT and EWKB, the SRID is always 4326 in the output. Refer to the note on EWKT and EWKB handling .

GeoJSON , WKT , WKB , EWKT , or EWKB

GEOMETRY_OUTPUT_FORMAT ¶

Display format for GEOMETRY values .

INITIAL_REPLICATION_SIZE_LIMIT_IN_TB ¶

Sets the maximum estimated size limit for the initial replication of a primary database to a secondary database (in TB). Set this parameter on any account that stores a secondary database. This size limit helps prevent accounts from accidentally incurring large database replication charges.

To remove the size limit, set the value to 0.0 .

Note that there is currently no default size limit applied to subsequent refreshes of a secondary database.

0.0 and above with a scale of at least 1 (e.g. 20.5 , 32.25 , 33.333 , etc.).

JDBC_ENABLE_PUT_GET ¶

Specifies whether to allow PUT and GET commands access to local file systems.

TRUE : JDBC enables PUT and GET commands.

FALSE : JDBC disables PUT and GET commands.

JDBC_TREAT_DECIMAL_AS_INT ¶

Specifies how JDBC processes columns that have a scale of zero ( 0 ).

TRUE : JDBC processes a column whose scale is zero as BIGINT.

FALSE : JDBC processes a column whose scale is zero as DECIMAL.

JDBC_TREAT_TIMESTAMP_NTZ_AS_UTC ¶

Specifies how JDBC processes TIMESTAMP_NTZ values.

By default, when the JDBC driver fetches a value of type TIMESTAMP_NTZ from Snowflake, it converts the value to “wallclock” time using the client JVM timezone.

Users who want to keep UTC timezone for the conversion can set this parameter to TRUE .

This parameter applies only to the JDBC driver.

TRUE : The driver uses UTC to get the TIMESTAMP_NTZ value in “wallclock” time.

FALSE : The driver uses the client JVM’s current timezone to get the TIMESTAMP_NTZ value in “wallclock” time.

JDBC_USE_SESSION_TIMEZONE ¶

Specifies whether the JDBC Driver uses the time zone of the JVM or the time zone of the session (specified by the TIMEZONE parameter) for the getDate() , getTime() , and getTimestamp() methods of the ResultSet class.

TRUE : The JDBC Driver uses the time zone of the session.

FALSE : The JDBC Driver uses the time zone of the JVM.

JSON_INDENT ¶

Specifies the number of blank spaces to indent each new element in JSON output in the session. Also specifies whether to insert newline characters after each element.

(a value of 0 returns compact output by removing all blank spaces and newline characters from the output)

This parameter does not affect JSON unloaded from a table into a file using the COPY INTO <location> command. The command always unloads JSON data in the NDJSON format:

Each record from the table separated by a newline character.

Within each record, compact formatting (i.e. no spaces or newline characters).

JS_TREAT_INTEGER_AS_BIGINT ¶

Specifies how the Snowflake Node.js Driver processes numeric columns that have a scale of zero ( 0 ), for example INTEGER or NUMBER(p, 0).

TRUE : JavaScript processes a column whose scale is zero as Bigint.

FALSE : JavaScript processes a column whose scale is zero as Number.

By default, Snowflake INTEGER columns (including BIGINT, NUMBER(p, 0), etc.) are converted to JavaScript’s Number data type. However, the largest legal Snowflake integer values are larger than the largest legal JavaScript Number values. To convert Snowflake INTEGER columns to JavaScript Bigint, which can store larger values than JavaScript Number, set the session parameter JS_TREAT_INTEGER_AS_BIGINT.

For examples of how to use this parameter, see Fetching integer data types as Bigint .

LOCK_TIMEOUT ¶

Number of seconds to wait while trying to lock a resource, before timing out and aborting the statement.

0 to any number (i.e. no limit). A value of 0 disables lock waiting (i.e. the statement must acquire the lock immediately or abort). If multiple resources need to be locked by the statement, the timeout applies separately to each lock attempt.

43200 (i.e. 12 hours)

LOG_LEVEL ¶

Object (for databases, schemas, stored procedures, and UDFs) — Can be set for Account » Database » Schema » Procedure and Account » Database » Schema » Function

Specifies the severity level of messages that should be ingested and made available in the active event table. Messages at the specified level (and at more severe levels) are ingested. For more information about log levels, see Setting log level .

The following table lists the levels of messages ingested when you set the LOG_LEVEL parameter to a level.

If this parameter is set in both the session and the object (or schema, database, or account), the more verbose value is used. See Understanding how Snowflake determines the effective log level .
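A sketch of directing log messages to an event table and raising the level for a session; the object names are illustrative, and the event table is assumed to already exist:

ALTER ACCOUNT SET EVENT_TABLE = mydb.public.my_events;   -- an event table created earlier with CREATE EVENT TABLE
ALTER SESSION SET LOG_LEVEL = 'WARN';                    -- ingest WARN and more severe messages for this session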

MAX_CONCURRENCY_LEVEL ¶

Object (for warehouses) — Can be set for Account » Warehouse

Specifies the concurrency level for SQL statements (i.e. queries and DML) executed by a warehouse. When the level is reached, the operation performed depends on whether the warehouse is a single-cluster or multi-cluster warehouse:

Single-cluster or multi-cluster (in Maximized mode): Statements are queued until already-allocated resources are freed or additional resources are provisioned, which can be accomplished by increasing the size of the warehouse.

Multi-cluster (in Auto-scale mode): Additional clusters are started.

MAX_CONCURRENCY_LEVEL can be used in conjunction with the STATEMENT_QUEUED_TIMEOUT_IN_SECONDS parameter to ensure a warehouse is never backlogged.

In general, it limits the number of statements that can be executed concurrently by a warehouse cluster, but there are exceptions. In the following cases, the actual number of statements executed concurrently by a warehouse might be more or less than the specified level:

Smaller, more basic statements: More statements might execute concurrently because small statements generally execute on a subset of the available compute resources in a warehouse. This means they only count as a fraction towards the concurrency level.

Larger, more complex statements: Fewer statements might execute concurrently.

This value is a default only and can be changed at any time:

Lowering the concurrency level for a warehouse can limit the number of concurrent queries running in a warehouse. When fewer queries are competing for the warehouse’s resources at a given time, a query can potentially be given more resources, which might result in faster query performance, particularly for a large/complex and multi-statement query.

Raising the concurrency level for a warehouse might decrease the compute resources that are available for a statement; however, it does not always limit the total number of concurrent queries that can be executed by the warehouse, nor does it necessarily impact total warehouse performance, which depends on the nature of the queries being executed.

Note that, as described earlier, this parameter impacts multi-cluster warehouses (in Auto-scale mode) because Snowflake automatically starts a new cluster within the multi-cluster warehouse to avoid queuing. Thus, lowering the concurrency level for a multi-cluster warehouse (in Auto-scale mode) potentially increases the number of active clusters at any time.

Also, remember that Snowflake automatically allocates resources for each statement when it is submitted and the allocated amount is dictated by the individual requirements of the statement. Based on this, and through observations of user query patterns over time, we’ve selected a default that balances performance and resource usage.

As such, before changing the default, we recommend that you test the change by adjusting the parameter in small increments and observing the impact against a representative set of your queries.
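As an illustration of the kind of small adjustment described above (the warehouse name and values are illustrative):

ALTER WAREHOUSE my_wh SET MAX_CONCURRENCY_LEVEL = 4;                  -- fewer concurrent statements, more resources per statement
ALTER WAREHOUSE my_wh SET STATEMENT_QUEUED_TIMEOUT_IN_SECONDS = 300;  -- cancel statements queued for more than 5 minutes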

MAX_DATA_EXTENSION_TIME_IN_DAYS ¶

Maximum number of days for which Snowflake can extend the data retention period for tables to prevent streams on the tables from becoming stale. By default, if the DATA_RETENTION_TIME_IN_DAYS setting for a source table is less than 14 days, and a stream has not been consumed, Snowflake temporarily extends this period to the stream’s offset, up to a maximum of 14 days, regardless of the Snowflake Edition for your account. The MAX_DATA_EXTENSION_TIME_IN_DAYS parameter enables you to limit this automatic extension period to control storage costs for data retention or for compliance reasons.

This parameter can be set at the account, database, schema, and table levels. Note that setting the parameter at the account or schema level only affects tables for which the parameter has not already been explicitly set at a lower level (e.g. at the table level by the table owner). A value of 0 effectively disables the automatic extension for the specified database, schema, or table. For more information about streams and staleness, see Change Tracking Using Table Streams .

0 to 90 (i.e. 90 days) — a value of 0 disables the automatic extension of the data retention period. To increase the maximum value for tables in your account, contact Snowflake Support .

This parameter can cause data to be retained longer than the default data retention. Before increasing it, confirm that the new value fits your compliance requirements.

MULTI_STATEMENT_COUNT ¶

Integer (Constant)

SQL API, JDBC, .NET, ODBC

Number of statements to execute when using the multi-statement capability.

0 : Variable number of statements.

1 : One statement.

More than 1 : When MULTI_STATEMENT_COUNT is set as a session parameter, you can specify the exact number of statements to execute.

Negative numbers are not permitted.

MIN_DATA_RETENTION_TIME_IN_DAYS ¶

Minimum number of days for which Snowflake retains historical data for performing Time Travel actions (SELECT, CLONE, UNDROP) on an object. If a minimum number of days for data retention is set on an account, the data retention period for an object is determined by MAX( DATA_RETENTION_TIME_IN_DAYS , MIN_DATA_RETENTION_TIME_IN_DAYS).

For more information, see Understanding & Using Time Travel .

This parameter only applies to permanent tables and does not apply to the following objects:

Transient tables

Temporary tables

External tables

Materialized views

This parameter can only be set and unset by account administrators (i.e. users with the ACCOUNTADMIN role or other role that is granted the ACCOUNTADMIN role).

Setting the minimum data retention time does not alter any existing DATA_RETENTION_TIME_IN_DAYS parameter value set on databases, schemas, or tables. The effective retention time of a database, schema, or table is MAX(DATA_RETENTION_TIME_IN_DAYS, MIN_DATA_RETENTION_TIME_IN_DAYS).
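For example, an account administrator might set an account-wide floor (the value is illustrative):

ALTER ACCOUNT SET MIN_DATA_RETENTION_TIME_IN_DAYS = 7;
-- A table with DATA_RETENTION_TIME_IN_DAYS = 1 now has an effective retention of MAX(1, 7) = 7 days.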

NETWORK_POLICY ¶

Account — Can be set only for Account (can be set by account administrators and security administrators)

Object (for users) — Can be set for Account » User

Specifies the network policy to enforce for your account. Network policies enable restricting access to your account based on users’ IP address. For more details, see Controlling network traffic with network policies .

Any existing network policy (created using CREATE NETWORK POLICY )

This is the only account parameter that can be set by security administrators (i.e. users with the SECURITYADMIN system role) or higher.
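A sketch of applying a policy at the account and user levels; the policy and user names are illustrative, and the policies are assumed to already exist:

ALTER ACCOUNT SET NETWORK_POLICY = 'corp_policy';
ALTER USER jsmith SET NETWORK_POLICY = 'contractor_policy';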

NOORDER_SEQUENCE_AS_DEFAULT ¶

Specifies whether the ORDER or NOORDER property is set by default when you create a new sequence or add a new table column.

The ORDER and NOORDER properties determine whether or not the values are generated for the sequence or auto-incremented column in increasing or decreasing order .

TRUE : When you create a new sequence or add a new table column, the NOORDER property is set by default.

NOORDER specifies that the values are not guaranteed to be in increasing order.

For example, if a sequence has START 1 INCREMENT 2, the generated values might be 1 , 3 , 101 , 5 , 103 , etc.

NOORDER can improve performance when multiple insert operations need to be performed concurrently (for example, when multiple clients are executing multiple INSERT statements).

FALSE : When you create a new sequence or add a new table column, the ORDER property is set by default.

ORDER specifies that the values generated for a sequence or auto-incremented column are in increasing order (or, if the interval is a negative value, in decreasing order).

For example, if a sequence or auto-incremented column has START 1 INCREMENT 2, the generated values might be 1 , 3 , 5 , 7 , 9 , etc.

If you set this parameter, the value that you set overrides the value in the 2024_01 behavior change bundle.

ODBC_TREAT_DECIMAL_AS_INT ¶

Specifies how ODBC processes columns that have a scale of zero ( 0 ).

TRUE : ODBC processes a column whose scale is zero as BIGINT.

FALSE : ODBC processes a column whose scale is zero as DECIMAL.

OAUTH_ADD_PRIVILEGED_ROLES_TO_BLOCKED_LIST ¶

Determines whether the ACCOUNTADMIN, ORGADMIN, and SECURITYADMIN roles can be used as the primary role when creating a Snowflake session based on the access token from Snowflake’s authorization server.

TRUE : Adds the ACCOUNTADMIN, ORGADMIN, and SECURITYADMIN roles to the BLOCKED_ROLES_LIST property of the Snowflake OAuth security integration, which means these roles cannot be used as the primary role when creating a Snowflake session using Snowflake OAuth.

FALSE : Removes the ACCOUNTADMIN, ORGADMIN, and SECURITYADMIN from the list of blocked roles defined by the BLOCKED_ROLES_LIST property of the Snowflake OAuth security integration.

PERIODIC_DATA_REKEYING ¶

This parameter only applies to Enterprise Edition (or higher). It enables/disables re-encryption of table data with new keys on a yearly basis to provide additional levels of data protection.

You can enable and disable rekeying at any time. Enabling/disabling rekeying does not result in gaps in your encrypted data:

If rekeying is enabled for a period of time and then disabled, all data already tagged for rekeying is rekeyed, but no further data is rekeyed until you re-enable it again. If rekeying is re-enabled, Snowflake automatically rekeys all data that has keys which meet the criteria (i.e. key is older than one year).

For more information about rekeying of encrypted data, see Understanding Encryption Key Management in Snowflake .

TRUE : Data is rekeyed after one year has passed since the data was last encrypted. Rekeying occurs in the background so no down-time is experienced and the affected data/table is always available.

FALSE : Data is not rekeyed.

There are charges associated with data rekeying because, after data is rekeyed, the old data (with the previous key encryption) is maintained in Fail-safe for the standard time period (7 days). For this reason, periodic rekeying is disabled by default. To enable periodic rekeying, you must explicitly enable it.

Also, Fail-safe charges for rekeying are not listed individually in your monthly statement; they are included in the Fail-safe total for your account each month.

For more information about Fail-safe, see Understanding and viewing Fail-safe .

PIPE_EXECUTION_PAUSED ¶

Object — Can be set for Account » Schema » Pipe

Specifies whether to pause a running pipe, primarily in preparation for transferring ownership of the pipe to a different role:

An account administrator (user with the ACCOUNTADMIN role) can set this parameter at the account level, effectively pausing or resuming all pipes in the account.

A user with the MODIFY privilege on a schema can pause or resume all pipes in the schema.

The pipe owner can set this parameter for a pipe.

Note that setting the parameter at the account or schema level only affects pipes for which the parameter has not already been explicitly set at a lower level (e.g. at the pipe level by the pipe owner).

This enables the practical use case in which an account administrator can pause all pipes at the account level, while a pipe owner can still have an individual pipe running.

TRUE : Pauses the pipe. When the parameter is set to this value, the SYSTEM$PIPE_STATUS function shows the executionState as PAUSED . Note that the pipe owner can continue to submit files to a paused pipe; however, the files are not processed until the pipe is resumed.

FALSE : Resumes the pipe, but only if ownership of the pipe has not been transferred while it was paused. When the parameter is set to this value, the SYSTEM$PIPE_STATUS function shows the executionState as RUNNING .

If ownership of the pipe was transferred to another role after the pipe was paused, this parameter cannot be used to resume the pipe. Instead, use the SYSTEM$PIPE_FORCE_RESUME function to explicitly force the pipe to resume.

This enables the new owner to use SYSTEM$PIPE_STATUS to evaluate the pipe status (e.g. determine how many files are waiting to be loaded) before resuming the pipe.

FALSE (pipes are running by default)

In general, pipes do not need to be paused, except for transferring ownership.
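A sketch of pausing a pipe before transferring ownership (the pipe name is illustrative):

ALTER PIPE mydb.public.my_pipe SET PIPE_EXECUTION_PAUSED = TRUE;
SELECT SYSTEM$PIPE_STATUS('mydb.public.my_pipe');   -- executionState should now report PAUSED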

PREVENT_UNLOAD_TO_INLINE_URL ¶

Specifies whether to prevent ad hoc data unload operations to external cloud storage locations (i.e. COPY INTO <location> statements that specify the cloud storage URL and access settings directly in the statement). For an example, see Unloading data from a table directly to files in an external location .

TRUE : COPY INTO <location> statements must reference either a named internal (Snowflake) or external stage or an internal user or table stage. A named external stage must store the cloud storage URL and access settings in its definition.

FALSE : Ad hoc data unload operations to external cloud storage locations are permitted.

PREVENT_UNLOAD_TO_INTERNAL_STAGES ¶

Specifies whether to prevent data unload operations to internal (Snowflake) stages using COPY INTO <location> statements.

TRUE : Unloading data from Snowflake tables to any internal stage, including user stages, table stages, or named internal stages is prevented.

FALSE : Unloading data to internal stages is permitted, limited only by the default restrictions of the stage type:

The current user can only unload data to their own user stage.

Users can only unload data to table stages when their active role has the OWNERSHIP privilege on the table.

Users can only unload data to named internal stages when their active role has the WRITE privilege on the stage.

QUERY_TAG ¶

String (up to 2000 characters)

Optional string that can be used to tag queries and other SQL statements executed within a session. The tags are displayed in the output of the QUERY_HISTORY , QUERY_HISTORY_BY_* functions.
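A sketch of tagging a session's statements and finding them later (the tag value is illustrative):

ALTER SESSION SET QUERY_TAG = 'nightly_load';
-- ... run the statements to be tracked ...
SELECT query_text, query_tag
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
WHERE query_tag = 'nightly_load';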

QUOTED_IDENTIFIERS_IGNORE_CASE ¶

Object — Can be set for Account » Database » Schema » Table

Specifies whether letters in double-quoted object identifiers are stored and resolved as uppercase letters. By default, Snowflake preserves the case of alphabetic characters when storing and resolving double-quoted identifiers. (see Identifier resolution .) You can use this parameter in situations in which third-party applications always use double quotes around identifiers .

Changing this parameter from the default value can affect your ability to find objects that were previously created with double-quoted mixed case identifiers. Refer to Impact of changing the parameter .

When set on a table, schema, or database, the setting only affects the evaluation of table names in the bodies of views and user-defined functions (UDFs). If your account uses double-quoted identifiers that should be treated as case-insensitive and you plan to share a view or UDF with an account that treats double-quoted identifiers as case-sensitive , you can set this on the view or UDF that you plan to share. This allows the other account to resolve the table names in the view or UDF correctly.

TRUE : Letters in double-quoted identifiers are stored and resolved as uppercase letters.

FALSE : The case of letters in double-quoted identifiers is preserved. Snowflake resolves and stores the identifiers in the specified case.

For more information, see Identifier resolution .

For example:
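(A sketch; the table name is illustrative.)

ALTER SESSION SET QUOTED_IDENTIFIERS_IGNORE_CASE = TRUE;
CREATE OR REPLACE TABLE "MyTable" (c1 INT);   -- stored and resolved as MYTABLE
SELECT * FROM "mytable";                      -- also resolves to MYTABLE, so the query succeeds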

REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_CREATION ¶

Specifies whether to require a storage integration object as cloud credentials when creating a named external stage (using CREATE STAGE ) to access a private cloud storage location.

TRUE : Creating an external stage to access a private cloud storage location requires referencing a storage integration object as cloud credentials.

FALSE : Creating an external stage does not require referencing a storage integration object. Users can instead reference explicit cloud provider credentials, such as secret keys or access tokens, if they have been configured for the storage location.

REQUIRE_STORAGE_INTEGRATION_FOR_STAGE_OPERATION ¶

Specifies whether to require using a named external stage that references a storage integration object as cloud credentials when loading data from or unloading data to a private cloud storage location.

TRUE : Loading data from or unloading data to a private cloud storage location requires using a named external stage that references a storage integration object; specifying a named external stage that references explicit cloud provider credentials, such as secret keys or access tokens, produces a user error.

FALSE : Users can load data from or unload data to a private cloud storage location using a named external stage that references explicit cloud provider credentials.

If PREVENT_UNLOAD_TO_INLINE_URL is FALSE, then users can specify the explicit cloud provider credentials directly in the COPY statement.

ROWS_PER_RESULTSET ¶

Specifies the maximum number of rows returned in a result set.

0 to any number (i.e. no limit) — a value of 0 specifies no maximum.

S3_STAGE_VPCE_DNS_NAME ¶

Specifies the DNS name of an Amazon S3 interface endpoint. Requests sent to the internal stage of an account via AWS PrivateLink for Amazon S3 use this endpoint to connect.

For more information, see Accessing Internal stages with dedicated interface endpoints .

Valid region-scoped DNS Name of an S3 interface endpoint.

The standard format begins with an asterisk ( * ) and ends with vpce.amazonaws.com (e.g. *.vpce-sd98fs0d9f8g.s3.us-west-2.vpce.amazonaws.com ). For more details about obtaining this value, refer to AWS configuration .

Alternative formats include bucket.vpce-xxxxxxxx.s3.<region>.vpce.amazonaws.com and vpce-xxxxxxxx.s3.<region>.vpce.amazonaws.com .

SAML_IDENTITY_PROVIDER ¶

This deprecated parameter enables federated authentication. It accepts a JSON object, enclosed in single quotes, with the following fields:

Specifies the certificate (generated by the IdP) that verifies communication between the IdP and Snowflake.

Indicates the Issuer/EntityID of the IdP.

For information on how to obtain this value in Okta and AD FS, see Migrating to a SAML2 security integration .

Specifies the URL endpoint (provided by the IdP) where Snowflake sends the SAML requests.

Specifies the type of IdP used for federated authentication ( "OKTA" , "ADFS" , "Custom" ).

Specifies the button text for the IdP in the Snowflake login page. The default label is Single Sign On . If you change the default label, the label you specify can only contain alphanumeric characters (i.e. special characters and blank spaces are not currently supported).

Note that, if the "type" field is "Okta" , a value for the label field does not need to be specified because Snowflake displays the Okta logo in the button.

For more information, including examples of setting the parameter, see Migrating to a SAML2 security integration .

SEARCH_PATH ¶

Specifies the path to search to resolve unqualified object names in queries. For more information, see Name Resolution in Queries .

Comma-separated list of identifiers. An identifier can be a fully or partially qualified schema name.

$current, $public

For more information about the default settings, see default search path .

You cannot set this parameter within a client connection string, such as a JDBC or ODBC connection string. You must establish a session before setting a search path.

SIMULATED_DATA_SHARING_CONSUMER ¶

Specifies the name of a consumer account to simulate for testing/validating shared data, particularly shared secure views. When this parameter is set in a session, shared views return rows as if executed in the specified consumer account rather than the provider account.

Simulations only succeed when the current role is the owner of the view. If the current role does not own the view, the simulation fails with an error.

For more information, see Introduction to Secure Data Sharing and Working with shares .

This is a session parameter, which means it can be set at the account level; however, it only applies to testing queries on shared views. Because the parameter affects all queries in a session, it should never be set at the account level.

SSO_LOGIN_PAGE ¶

This deprecated parameter disables preview mode for testing SSO (after enabling federated authentication) before rolling it out to users:

TRUE : Preview mode is disabled and users will see the button for Snowflake-initiated SSO for your identity provider (as specified in SAML_IDENTITY_PROVIDER ) in the Snowflake main login page.

FALSE : Preview mode is enabled and SSO can be tested using the following URL:

If your account is in US West: https://<account_identifier>.snowflakecomputing.com/console/login?fedpreview=true

If your account is in any other region: https://<account_identifier>.<region_id>.snowflakecomputing.com/console/login?fedpreview=true

For more information, see:

Migrating to a SAML2 security integration

Account identifiers

STATEMENT_QUEUED_TIMEOUT_IN_SECONDS ¶

Session and Object (for warehouses)

Can be set for Account » User » Session; can also be set for individual warehouses

Amount of time, in seconds, a SQL statement (query, DDL, DML, etc.) remains queued for a warehouse before it is canceled by the system. This parameter can be used in conjunction with the MAX_CONCURRENCY_LEVEL parameter to ensure a warehouse is never backlogged.

The parameter can be set within the session hierarchy. It can also be set for a warehouse to control the queue timeout for all SQL statements processed by the warehouse. When the parameter is set for both a warehouse and a session, the lowest non-zero value is enforced. For example:

A warehouse has a queued timeout of 120 seconds.

The queued timeout for the session is set to 60 seconds.

The session timeout takes precedence (i.e. any statement submitted in the session is canceled after being queued for longer than 60 seconds).

For runs of tasks , the USER_TASK_TIMEOUT_MS task parameter takes precedence over the STATEMENT_QUEUED_TIMEOUT_IN_SECONDS setting.

When comparing the values of these two parameters, note that STATEMENT_QUEUED_TIMEOUT_IN_SECONDS is set in units of seconds, while USER_TASK_TIMEOUT_MS uses units of milliseconds.

For more information about USER_TASK_TIMEOUT_MS, see the Optional Parameters section of CREATE TASK .

0 to any number (i.e. no limit) — a value of 0 specifies that no timeout is enforced. A statement will remain queued as long as the queue persists.

0 (i.e. no timeout)

STATEMENT_TIMEOUT_IN_SECONDS ¶

Amount of time, in seconds, after which a running SQL statement (query, DDL, DML, etc.) is canceled by the system.

The parameter can be set within the session hierarchy. It can also be set for an individual warehouse to control the runtime for all SQL statements processed by the warehouse. When the parameter is set for both a warehouse and a session, the lowest non-zero value is enforced. For example:

A warehouse has a timeout of 1000 seconds.

The timeout for the session is set to 500 seconds.

The session timeout takes precedence (i.e. any statement submitted in the session is canceled after running for longer than 500 seconds).
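Expressed as statements (the warehouse name and values are illustrative):

ALTER WAREHOUSE my_wh SET STATEMENT_TIMEOUT_IN_SECONDS = 1000;
ALTER SESSION SET STATEMENT_TIMEOUT_IN_SECONDS = 500;   -- the lower non-zero value (500 seconds) is enforced for this session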

For runs of tasks :

If a task relies on a virtual warehouse for its compute resources and STATEMENT_TIMEOUT_IN_SECONDS is set at the warehouse level, then the effective timeout is the smaller of the following parameters:

STATEMENT_TIMEOUT_IN_SECONDS

USER_TASK_TIMEOUT_MS (parameter set on the task)

Otherwise, the USER_TASK_TIMEOUT_MS task parameter takes precedence over the STATEMENT_TIMEOUT_IN_SECONDS setting for task runs.

When comparing the values of these two parameters, note that STATEMENT_TIMEOUT_IN_SECONDS is set in units of seconds, while USER_TASK_TIMEOUT_MS uses units of milliseconds.

0 to 604800 (i.e. 7 days) — a value of 0 specifies that the maximum timeout value is enforced.

172800 (i.e. 2 days)

STRICT_JSON_OUTPUT ¶

This parameter specifies whether JSON output in a session is compatible with the general standard (as described by http://json.org ).

By design, Snowflake allows JSON input that contains non-standard values; however, these non-standard values might result in Snowflake outputting JSON that is incompatible with other platforms and languages. This parameter, when enabled, ensures that Snowflake outputs valid/compatible JSON.

TRUE : Strict JSON output is enabled, enforcing the following behavior:

Missing and undefined values in the input are mapped to JSON NULL.

Non-finite numeric values in the input (Infinity, -Infinity, NaN, etc.) are mapped to strings with valid JavaScript representations. This enables compatibility with JavaScript and also allows conversion of these values back to numeric values.

FALSE : Strict JSON output is not enabled.

SUSPEND_TASK_AFTER_NUM_FAILURES ¶

Object (for databases, schemas, and tasks) — Can be set for Account » Database » Schema » Task

Number of consecutive failed task runs after which a standalone task or task graph root task is suspended automatically. Failed task runs include runs in which the SQL code in the task body either produces a user error or times out. Task runs that are skipped, canceled, or that fail due to a system error are considered indeterminate and are not included in the count of failed task runs.

When the parameter is set to 0 , the failed task is not automatically suspended.

When the parameter is set to a value greater than 0 , the following behavior applies to runs of standalone tasks or task graph root tasks:

A standalone task is automatically suspended after the specified number of consecutive task runs either fail or time out.

A root task is automatically suspended after the run of any single task in a task graph fails or times out the specified number of times in consecutive runs.

The default value for the parameter is set to 10 , which means that the task is automatically suspended after 10 consecutive failed task runs.

When you explicitly set the parameter value at the account, database, or schema level, the change is applied to tasks contained in the modified object during their next scheduled run (including any child task in a task graph run in progress).

Suspending a standalone task resets its count of failed task runs. Suspending the root task of a task graph resets the count for each task in the task graph.

0 - No upper limit.
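For example, the threshold could be lowered for all tasks in a schema (the names and value are illustrative); the change applies at each task's next scheduled run:

ALTER SCHEMA mydb.my_schema SET SUSPEND_TASK_AFTER_NUM_FAILURES = 3;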

TIMESTAMP_DAY_IS_ALWAYS_24H ¶

Specifies whether the DATEADD function (and its aliases) always consider a day to be exactly 24 hours for expressions that span multiple days.

TRUE : A day is always exactly 24 hours.

FALSE : A day is not always 24 hours.

If set to TRUE , the actual time of day might not be preserved when daylight saving time (DST) is in effect. For example:
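(A sketch; the timestamps assume the America/Los_Angeles time zone, where clocks spring forward on 2024-03-10.)

ALTER SESSION SET TIMEZONE = 'America/Los_Angeles';
ALTER SESSION SET TIMESTAMP_DAY_IS_ALWAYS_24H = TRUE;
SELECT DATEADD(day, 1, '2024-03-09 10:00:00'::TIMESTAMP_LTZ);   -- exactly 24 hours later: 2024-03-10 11:00 local time
ALTER SESSION SET TIMESTAMP_DAY_IS_ALWAYS_24H = FALSE;
SELECT DATEADD(day, 1, '2024-03-09 10:00:00'::TIMESTAMP_LTZ);   -- local time of day preserved: 2024-03-10 10:00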

TIMESTAMP_INPUT_FORMAT ¶

Specifies the input format for the TIMESTAMP data type alias. For more information, see Date and Time Input and Output Formats .

Any valid, supported timestamp format or AUTO

( AUTO specifies that Snowflake attempts to automatically detect the format of timestamps stored in the system during the session)

TIMESTAMP_LTZ_OUTPUT_FORMAT ¶

Specifies the display format for the TIMESTAMP_LTZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT . For more information, see Date and Time Input and Output Formats .

Any valid, supported timestamp format

TIMESTAMP_NTZ_OUTPUT_FORMAT ¶

Specifies the display format for the TIMESTAMP_NTZ data type.

YYYY-MM-DD HH24:MI:SS.FF3

TIMESTAMP_OUTPUT_FORMAT ¶

Specifies the display format for the TIMESTAMP data type alias. For more information, see Date and Time Input and Output Formats .

YYYY-MM-DD HH24:MI:SS.FF3 TZHTZM

TIMESTAMP_TYPE_MAPPING ¶

Specifies the TIMESTAMP_* variation that the TIMESTAMP data type alias maps to.

TIMESTAMP_LTZ , TIMESTAMP_NTZ , or TIMESTAMP_TZ

TIMESTAMP_NTZ

TIMESTAMP_TZ_OUTPUT_FORMAT ¶

Specifies the display format for the TIMESTAMP_TZ data type. If no format is specified, defaults to TIMESTAMP_OUTPUT_FORMAT . For more information, see Date and Time Input and Output Formats .

TIMEZONE ¶

Specifies the time zone for the session.

You can specify a time zone name or a link name from release 2021a of the IANA Time Zone Database (e.g. America/Los_Angeles , Europe/London , UTC , Etc/GMT , etc.).

America/Los_Angeles

Time zone names are case-sensitive and must be enclosed in single quotes (e.g. 'UTC' ).

Snowflake does not support the majority of timezone abbreviations (e.g. PDT , EST , etc.) because a given abbreviation might refer to one of several different time zones. For example, CST might refer to Central Standard Time in North America (UTC-6), Cuba Standard Time (UTC-5), or China Standard Time (UTC+8).

TIME_INPUT_FORMAT ¶

Specifies the input format for the TIME data type. For more information, see Date and Time Input and Output Formats .

Any valid, supported time format or AUTO

( AUTO specifies that Snowflake attempts to automatically detect the format of times stored in the system during the session)

TIME_OUTPUT_FORMAT ¶

Specifies the display format for the TIME data type. For more information, see Date and Time Input and Output Formats .

Any valid, supported time format

TRACE_LEVEL ¶

Controls how trace events are ingested into the event table. For more information about trace levels, see Setting trace level .

ALWAYS : All spans and trace events will be recorded in the event table.

ON_EVENT : Trace events will be recorded in the event table only when your stored procedures or UDFs explicitly add events.

OFF : No spans or trace events will be recorded in the event table.

When tracing events, you must also set the LOG_LEVEL parameter to one of its supported values.

TRANSACTION_ABORT_ON_ERROR ¶

Specifies the action to perform when a statement issued within a non-autocommit transaction returns with an error.

TRUE : The non-autocommit transaction is aborted. All statements issued inside that transaction will fail until a commit or rollback statement is executed to close that transaction.

FALSE : The non-autocommit transaction is not aborted.

TRANSACTION_DEFAULT_ISOLATION_LEVEL ¶

Specifies the isolation level for transactions in the user session.

READ COMMITTED (only currently-supported value)

READ COMMITTED

TWO_DIGIT_CENTURY_START ¶

Specifies the “century start” year for 2-digit years (i.e. the earliest year such dates can represent). This parameter prevents ambiguous dates when importing or converting data with the YY date format component (i.e. years represented as 2 digits).

1900 to 2100 (any value outside of this range returns an error)
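A sketch of the effect (the value and date are illustrative):

ALTER SESSION SET TWO_DIGIT_CENTURY_START = 1980;
SELECT TO_DATE('75/01/15', 'YY/MM/DD');   -- 2-digit years now map to 1980-2079, so '75' is read as 2075-01-15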

UNSUPPORTED_DDL_ACTION ¶

Determines if an unsupported (i.e. non-default) value specified for a constraint property returns an error.

IGNORE : Snowflake does not return an error for unsupported values.

FAIL : Snowflake returns an error for unsupported values.

This parameter does not determine whether the constraint is created. Snowflake does not create constraints using unsupported values, regardless of how this parameter is set.

For more information, see Constraint Properties .

USE_CACHED_RESULT ¶

Specifies whether to reuse persisted query results, if available, when a matching query is submitted.

TRUE : When a query is submitted, Snowflake checks for matching query results for previously-executed queries and, if a matching result exists, uses the result instead of executing the query. This can help reduce query time because Snowflake retrieves the result directly from the cache.

FALSE : Snowflake executes each query when submitted, regardless of whether a matching query result exists.
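A common use is disabling the cache while benchmarking so repeated runs of the same query are actually re-executed; a sketch (the table name is illustrative):

ALTER SESSION SET USE_CACHED_RESULT = FALSE;
SELECT COUNT(*) FROM mytable;   -- executed again even if an identical query just ran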

USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE

Specifies the size of the compute resources to provision for the first run of the task, before a task history is available for Snowflake to determine an ideal size. Once a task has successfully completed a few runs, Snowflake ignores this parameter setting. If the task history is unavailable for a given task, the compute resources revert to this initial size.

This parameter applies only to serverless tasks.

The size is equivalent to the compute resources available when creating a warehouse. If the parameter is omitted, the first runs of the task are executed using a medium-sized (MEDIUM) warehouse.

You can change the initial size for individual tasks (using ALTER TASK) after the task is created but before it has run successfully once. Changing the parameter after the first run of this task starts has no effect on the compute resources for current or future task runs.

Note that suspending and resuming a task does not remove the task history used to size the compute resources. The task history is only removed if the task is recreated (using the CREATE OR REPLACE TASK syntax).

Values: Any traditional warehouse size (SMALL, MEDIUM, LARGE, etc.), with a maximum size of XXLARGE.
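As a sketch (the task name and schedule are hypothetical), omitting the WAREHOUSE clause makes the task serverless, and this parameter sets the size used for its first runs:

    CREATE TASK serverless_demo_task
      SCHEDULE = '60 MINUTE'
      USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE = 'XSMALL'
    AS
      SELECT CURRENT_TIMESTAMP();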

USER_TASK_TIMEOUT_MS

Specifies the time limit on a single run of the task before it times out (in milliseconds).

Before you increase the time limit for tasks significantly, consider whether the SQL statements in the task definitions could be optimized (either by rewriting the statements or using stored procedures) or whether the warehouse size for tasks with user-managed compute resources should be increased.

In some situations, the STATEMENT_TIMEOUT_IN_SECONDS parameter has a higher precedence than USER_TASK_TIMEOUT_MS. For details, see STATEMENT_TIMEOUT_IN_SECONDS.

Values: 0 to 86400000 (1 day)

Default: 3600000 (1 hour)
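A short sketch of tightening the limit for an individual task (serverless_demo_task refers to the hypothetical task sketched above):

    -- Limit a single run of the task to 10 minutes (600,000 ms).
    ALTER TASK serverless_demo_task SET USER_TASK_TIMEOUT_MS = 600000;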

WEEK_OF_YEAR_POLICY

Specifies how the weeks in a given year are computed.

0: The semantics used are equivalent to the ISO semantics, in which a week belongs to a given year if at least 4 days of that week are in that year.

1: January 1 is included in the first week of the year and December 31 is included in the last week of the year.

Default: 0 (i.e. ISO-like behavior)

1 is the most common value, based on feedback we’ve received. For more information, including examples, see Calendar Weeks and Weekdays.

WEEK_START

Specifies the first day of the week (used by week-related date functions).

0: Legacy Snowflake behavior is used (i.e. ISO-like semantics).

1 (Monday) to 7 (Sunday): All the week-related functions use weeks that start on the specified day of the week.

Default: 0 (i.e. legacy Snowflake behavior)
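A short sketch showing both week-related parameters in action (the dates are illustrative):

    ALTER SESSION SET WEEK_OF_YEAR_POLICY = 1;
    ALTER SESSION SET WEEK_START = 1;

    -- 2021-01-01 is a Friday; under ISO semantics (policy 0) it belongs to week 53
    -- of 2020, whereas policy 1 counts it in week 1 of 2021.
    SELECT WEEKOFYEAR('2021-01-01'::DATE);  -- 1

    -- With WEEK_START = 1, days are numbered Monday (1) through Sunday (7).
    SELECT DAYOFWEEK('2021-01-03'::DATE);   -- 7 (a Sunday)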
