
Hands-on Practice

SQL Practice

Write real Snowflake SQL. Each exercise covers an exam-relevant topic with hints, model solutions, and explanations. Filter by topic or difficulty to target your weak areas.

Topics: Warehouses | RBAC | Data Loading | Semi-Structured | Streams & Tasks | Data Protection

Tip: Type your answer in the editor and click Check Answer. Use hints progressively before peeking at the solution.

Create a Virtual Warehouse (easy | Warehouses)
Scenario

You need a new XS warehouse for development use that auto-suspends after 5 minutes.

Your Task

Write the SQL to create a warehouse called DEV_WH that is X-Small, auto-suspends after 300 seconds, and auto-resumes.

Your SQL
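
One possible solution, sketched in Snowflake SQL (sizes and parameter names follow the exercise brief):

```sql
-- X-Small warehouse that suspends after 5 idle minutes and resumes on demand
CREATE WAREHOUSE DEV_WH
  WAREHOUSE_SIZE = 'XSMALL'
  AUTO_SUSPEND = 300        -- seconds of inactivity before suspending
  AUTO_RESUME = TRUE;
```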
Database and Schema Setup (easy | Databases & Schemas)
Scenario

You are building a new analytics database for the marketing team.

Your Task

Create a database called MARKETING_DB, then create a schema called REPORTS inside it.

Your SQL
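
A minimal sketch of one valid answer:

```sql
CREATE DATABASE MARKETING_DB;
-- Fully qualify the schema so it lands in the new database
CREATE SCHEMA MARKETING_DB.REPORTS;
```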
Query Historical Data (easy | Time Travel)
Scenario

A colleague accidentally deleted some orders yesterday. You need to check what the ORDERS table contained 24 hours ago.

Your Task

Write a SELECT query to retrieve all rows from ORDERS as they existed 24 hours ago using Time Travel.

Table: ORDERS (order_id, customer_id, amount, created_at)
Your SQL
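
One way to answer, using a relative Time Travel offset (a timestamp-based AT clause would also work):

```sql
-- OFFSET is in seconds; -86400 = 24 hours before the current time
SELECT *
FROM ORDERS AT (OFFSET => -86400);
```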
Zero-Copy Clone (easy | Cloning)
Scenario

You want to create a development copy of the PRODUCTION_ORDERS table for testing without duplicating storage.

Your Task

Create a zero-copy clone of PRODUCTION_ORDERS called DEV_ORDERS.

Your SQL
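
A model solution sketch — the clone shares micro-partitions with the source, so no storage is duplicated until data diverges:

```sql
CREATE TABLE DEV_ORDERS CLONE PRODUCTION_ORDERS;
```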
Create Role and Grant Privileges (easy | RBAC)
Scenario

You need to create a new ANALYST role and give it read-only access to the SALES table.

Your Task

Create the ANALYST role, grant USAGE on DATABASE CORP_DB and SCHEMA CORP_DB.PUBLIC, then grant SELECT on SALES to ANALYST.

Database: CORP_DB | Schema: CORP_DB.PUBLIC | Table: SALES
Your SQL
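
One possible answer, granting the usual access chain (database usage, schema usage, then table select):

```sql
CREATE ROLE ANALYST;
GRANT USAGE ON DATABASE CORP_DB TO ROLE ANALYST;
GRANT USAGE ON SCHEMA CORP_DB.PUBLIC TO ROLE ANALYST;
GRANT SELECT ON TABLE CORP_DB.PUBLIC.SALES TO ROLE ANALYST;
```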
Future Grants (medium | RBAC)
Scenario

Your team creates new tables in the ANALYTICS schema regularly. You want the ANALYST role to automatically get SELECT on every new table without manual grants.

Your Task

Write the SQL to grant SELECT on all future tables in ANALYTICS schema to the ANALYST role.

Database: CORP_DB | Schema: CORP_DB.ANALYTICS
Your SQL
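
A sketch of the expected answer — future grants apply only to tables created after the grant, not existing ones:

```sql
GRANT SELECT ON FUTURE TABLES IN SCHEMA CORP_DB.ANALYTICS TO ROLE ANALYST;
```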
Create a User (easy | Users)
Scenario

A new data engineer named Sarah is joining the team.

Your Task

Create a Snowflake user called SARAH_JONES with password 'Temp1234!', default role SYSADMIN, and default warehouse COMPUTE_WH.

Your SQL
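
One possible solution (MUST_CHANGE_PASSWORD is optional and not required by the brief, but is common practice for temporary passwords):

```sql
CREATE USER SARAH_JONES
  PASSWORD = 'Temp1234!'
  DEFAULT_ROLE = SYSADMIN
  DEFAULT_WAREHOUSE = COMPUTE_WH
  MUST_CHANGE_PASSWORD = TRUE;  -- optional extra
```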
Create a Named Stage (easy | Stages)
Scenario

You need a named internal stage for loading CSV files into the SALES table.

Your Task

Create a named internal stage called MY_CSV_STAGE with a CSV file format (comma delimiter, skip header row).

Your SQL
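
A sketch using an inline file format (a separately created named file format would also be acceptable):

```sql
CREATE STAGE MY_CSV_STAGE
  FILE_FORMAT = (TYPE = 'CSV' FIELD_DELIMITER = ',' SKIP_HEADER = 1);
```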
Load Data from a Stage (medium | COPY INTO)
Scenario

Files have been uploaded to MY_CSV_STAGE. Load them into the ORDERS table, skipping files with errors, and delete files after loading.

Your Task

Write a COPY INTO statement to load from MY_CSV_STAGE into ORDERS, with ON_ERROR = SKIP_FILE and PURGE = TRUE.

Stage: MY_CSV_STAGE | Table: ORDERS
Your SQL
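
One valid answer — note that PURGE = TRUE removes staged files only after they load successfully:

```sql
COPY INTO ORDERS
FROM @MY_CSV_STAGE
ON_ERROR = SKIP_FILE   -- skip any file containing errors
PURGE = TRUE;          -- delete files from the stage after a successful load
```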
Create a Snowpipe (medium | Snowpipe)
Scenario

You want to automatically ingest CSV files that land in an external stage MY_S3_STAGE into the EVENTS table.

Your Task

Create a Snowpipe called EVENTS_PIPE with auto_ingest enabled that runs COPY INTO EVENTS FROM @MY_S3_STAGE.

Your SQL
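
A model solution sketch — AUTO_INGEST also requires the cloud bucket behind the external stage to publish event notifications, which is configured outside this statement:

```sql
CREATE PIPE EVENTS_PIPE
  AUTO_INGEST = TRUE
AS
  COPY INTO EVENTS FROM @MY_S3_STAGE;
```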
Query JSON in a VARIANT Column (medium | Semi-Structured)
Scenario

A table EVENTS has a VARIANT column called PAYLOAD containing: {"user_id": 42, "action": "click", "metadata": {"page": "/home"}}

Your Task

Write a SELECT that extracts user_id (as INTEGER), action (as STRING), and the nested page from metadata.

Table: EVENTS(event_id INT, payload VARIANT)
Your SQL
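
One way to write it, using colon path notation and :: casts:

```sql
SELECT
  payload:user_id::INTEGER      AS user_id,
  payload:action::STRING        AS action,
  payload:metadata:page::STRING AS page   -- nested path into the metadata object
FROM EVENTS;
```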
Flatten an Array (hard | Semi-Structured)
Scenario

A PRODUCTS table has a VARIANT column TAGS containing an array: ["electronics", "laptop", "portable"]. You need one row per tag.

Your Task

Write a query using FLATTEN to explode the TAGS array into individual rows, returning product_id and the tag value as a string.

Table: PRODUCTS(product_id INT, tags VARIANT)
Your SQL
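
A sketch using LATERAL FLATTEN — each element of the array becomes a row, exposed via the `value` column of the flatten output:

```sql
SELECT
  p.product_id,
  t.value::STRING AS tag   -- cast VARIANT element to a plain string
FROM PRODUCTS p,
     LATERAL FLATTEN(INPUT => p.tags) t;
```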
Create a Stream (medium | Streams)
Scenario

You want to capture all changes (INSERT, UPDATE, DELETE) to the ORDERS table for a downstream CDC pipeline.

Your Task

Create a standard stream called ORDERS_STREAM on the ORDERS table.

Table: ORDERS
Your SQL
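
One possible answer — a standard stream captures inserts, updates, and deletes by default:

```sql
CREATE STREAM ORDERS_STREAM ON TABLE ORDERS;
```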
Consume a Stream with MERGE (medium | Streams)
Scenario

You have ORDERS_STREAM with new inserts. Merge new orders into the ORDERS_SUMMARY table (upsert by order_id).

Your Task

Write a MERGE statement that inserts new orders from ORDERS_STREAM into ORDERS_SUMMARY when order_id doesn't exist, or updates amount when it does.

Stream: ORDERS_STREAM(order_id, customer_id, amount, METADATA$ACTION) | Table: ORDERS_SUMMARY(order_id, customer_id, total_amount)
Your SQL
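
A sketch of one valid MERGE — filtering on METADATA$ACTION = 'INSERT' restricts the source to new rows, and querying the stream in a DML statement advances its offset:

```sql
MERGE INTO ORDERS_SUMMARY t
USING (
    SELECT order_id, customer_id, amount
    FROM ORDERS_STREAM
    WHERE METADATA$ACTION = 'INSERT'
) s
ON t.order_id = s.order_id
WHEN MATCHED THEN
    UPDATE SET t.total_amount = s.amount
WHEN NOT MATCHED THEN
    INSERT (order_id, customer_id, total_amount)
    VALUES (s.order_id, s.customer_id, s.amount);
```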
Create a Scheduled Task (medium | Tasks)
Scenario

You want to run a stored procedure REFRESH_SUMMARY every 15 minutes.

Your Task

Create a task called REFRESH_TASK on warehouse COMPUTE_WH that executes CALL REFRESH_SUMMARY() every 15 minutes. Remember to resume the task after creating it.

Your SQL
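
One possible answer — tasks are created in a suspended state, so the final ALTER is required for the schedule to take effect:

```sql
CREATE TASK REFRESH_TASK
  WAREHOUSE = COMPUTE_WH
  SCHEDULE = '15 MINUTE'
AS
  CALL REFRESH_SUMMARY();

ALTER TASK REFRESH_TASK RESUME;  -- tasks start suspended by default
```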
Add a Clustering Key (medium | Clustering)
Scenario

Queries on the TRANSACTIONS table are slow because they filter heavily on the ORDER_DATE column and the table is multi-terabyte.

Your Task

Add a clustering key on ORDER_DATE to the TRANSACTIONS table.

Table: TRANSACTIONS(tx_id, order_date DATE, amount, customer_id)
Your SQL
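
A sketch of the expected statement — automatic clustering then maintains partition ordering on ORDER_DATE in the background:

```sql
ALTER TABLE TRANSACTIONS CLUSTER BY (ORDER_DATE);
```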
Enable Search Optimisation (hard | Search Optimisation)
Scenario

Users frequently look up individual customer records by CUSTOMER_ID in a 500M row CUSTOMERS table. Clustering doesn't help because it's a point lookup.

Your Task

Enable Search Optimisation on the CUSTOMER_ID column of the CUSTOMERS table for equality lookups.

Table: CUSTOMERS(customer_id VARCHAR, name, email, ...)
Your SQL
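
One way to write it, scoping the optimisation to equality lookups on a single column (the ON clause; omitting it enables the service table-wide):

```sql
ALTER TABLE CUSTOMERS ADD SEARCH OPTIMIZATION ON EQUALITY(CUSTOMER_ID);
```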
Restore a Dropped Table (easy | Time Travel)
Scenario

The CUSTOMERS table was dropped accidentally 2 hours ago. Time Travel retention is 24 hours.

Your Task

Write the SQL to restore the dropped CUSTOMERS table.

Your SQL
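
A model solution sketch — UNDROP works as long as the table is still within its Time Travel retention window:

```sql
UNDROP TABLE CUSTOMERS;
```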
Create a Data Share (medium | Data Sharing)
Scenario

You want to share the REVENUE_SUMMARY view with a partner account (partner_account_id: PARTNER123).

Your Task

Create a share called REVENUE_SHARE, grant USAGE on DATABASE ANALYTICS_DB, USAGE on SCHEMA PUBLIC, SELECT on VIEW REVENUE_SUMMARY, then add PARTNER123 to the share.

Database: ANALYTICS_DB | Schema: PUBLIC | View: REVENUE_SUMMARY
Your SQL
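
One possible answer — note that in practice Snowflake requires views shared this way to be secure views, a detail the brief leaves implicit:

```sql
CREATE SHARE REVENUE_SHARE;
GRANT USAGE ON DATABASE ANALYTICS_DB TO SHARE REVENUE_SHARE;
GRANT USAGE ON SCHEMA ANALYTICS_DB.PUBLIC TO SHARE REVENUE_SHARE;
GRANT SELECT ON VIEW ANALYTICS_DB.PUBLIC.REVENUE_SUMMARY TO SHARE REVENUE_SHARE;
ALTER SHARE REVENUE_SHARE ADD ACCOUNTS = PARTNER123;
```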
Create a Masking Policy (hard | Dynamic Data Masking)
Scenario

The EMAIL column should be fully visible to the ANALYST role but masked as '****@****.com' for all other roles.

Your Task

Create a masking policy called EMAIL_MASK that shows the real value to ANALYST role, and shows '****@****.com' to all others. Then apply it to the EMAIL column on CUSTOMERS.

Table: CUSTOMERS(customer_id, name, email VARCHAR)
Your SQL
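
A sketch of one valid answer — the policy body evaluates per query against the executing role, and is then attached to the column:

```sql
CREATE MASKING POLICY EMAIL_MASK AS (val VARCHAR) RETURNS VARCHAR ->
  CASE
    WHEN CURRENT_ROLE() = 'ANALYST' THEN val  -- analysts see the real value
    ELSE '****@****.com'                      -- everyone else sees the mask
  END;

ALTER TABLE CUSTOMERS MODIFY COLUMN email
  SET MASKING POLICY EMAIL_MASK;
```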