Hands-on Practice
SQL Practice
Write real Snowflake SQL. Each exercise covers an exam-relevant topic with hints, model solutions, and explanations. Filter by topic or difficulty to target your weak areas.
Tip: Type your answer in the editor and click Check Answer. Use hints progressively before peeking at the solution.
You need a new XS warehouse for development use that auto-suspends after 5 minutes.
Write the SQL to create a warehouse called DEV_WH that is X-Small, auto-suspends after 300 seconds, and auto-resumes.
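One model solution (AUTO_SUSPEND is specified in seconds):

```sql
CREATE WAREHOUSE DEV_WH
  WAREHOUSE_SIZE = 'XSMALL'   -- X-Small
  AUTO_SUSPEND = 300          -- 5 minutes
  AUTO_RESUME = TRUE;
```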
You are building a new analytics database for the marketing team.
Create a database called MARKETING_DB, then create a schema called REPORTS inside it.
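A model solution; qualifying the schema name avoids depending on the current database context:

```sql
CREATE DATABASE MARKETING_DB;
CREATE SCHEMA MARKETING_DB.REPORTS;
```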
A colleague accidentally deleted some orders yesterday. You need to check what the ORDERS table contained 24 hours ago.
Write a SELECT query to retrieve all rows from ORDERS as they existed 24 hours ago using Time Travel.
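One model solution using an OFFSET in seconds; an equivalent form uses AT(TIMESTAMP => ...) with a computed timestamp:

```sql
SELECT *
FROM ORDERS AT(OFFSET => -86400);  -- 24 hours ago, in seconds
```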
You want to create a development copy of the PRODUCTION_ORDERS table for testing without duplicating storage.
Create a zero-copy clone of PRODUCTION_ORDERS called DEV_ORDERS.
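A model solution; the clone shares micro-partitions with the source until either side changes, so no storage is duplicated up front:

```sql
CREATE TABLE DEV_ORDERS CLONE PRODUCTION_ORDERS;
```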
You need to create a new ANALYST role and give it read-only access to the SALES table.
Create the ANALYST role, grant USAGE on DATABASE CORP_DB and SCHEMA CORP_DB.PUBLIC, then grant SELECT on SALES to ANALYST.
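A model solution; note that SELECT on the table is not usable without USAGE on both the database and the schema:

```sql
CREATE ROLE ANALYST;
GRANT USAGE ON DATABASE CORP_DB TO ROLE ANALYST;
GRANT USAGE ON SCHEMA CORP_DB.PUBLIC TO ROLE ANALYST;
GRANT SELECT ON TABLE CORP_DB.PUBLIC.SALES TO ROLE ANALYST;
```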
Your team creates new tables in the ANALYTICS schema regularly. You want the ANALYST role to automatically get SELECT on every new table without manual grants.
Write the SQL to grant SELECT on all future tables in ANALYTICS schema to the ANALYST role.
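A model solution; this affects only tables created after the grant, so existing tables still need a separate GRANT SELECT ON ALL TABLES:

```sql
GRANT SELECT ON FUTURE TABLES IN SCHEMA ANALYTICS TO ROLE ANALYST;
```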
A new data engineer named Sarah is joining the team.
Create a Snowflake user called SARAH_JONES with password 'Temp1234!', default role SYSADMIN, and default warehouse COMPUTE_WH.
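One model solution; the MUST_CHANGE_PASSWORD line is an optional hardening step beyond the stated requirements, since 'Temp1234!' is a temporary password:

```sql
CREATE USER SARAH_JONES
  PASSWORD = 'Temp1234!'
  DEFAULT_ROLE = SYSADMIN
  DEFAULT_WAREHOUSE = COMPUTE_WH
  MUST_CHANGE_PASSWORD = TRUE;  -- optional, not required by the exercise
```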
You need a named internal stage for loading CSV files into the SALES table.
Create a named internal stage called MY_CSV_STAGE with a CSV file format (comma delimiter, skip header row).
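A model solution with an inline file format; a named FILE FORMAT object referenced via FILE_FORMAT = (FORMAT_NAME = ...) would also be accepted:

```sql
CREATE STAGE MY_CSV_STAGE
  FILE_FORMAT = (TYPE = 'CSV' FIELD_DELIMITER = ',' SKIP_HEADER = 1);
```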
Files have been uploaded to MY_CSV_STAGE. Load them into the ORDERS table, skipping files with errors, and delete files after loading.
Write a COPY INTO statement to load from MY_CSV_STAGE into ORDERS, with ON_ERROR = SKIP_FILE and PURGE = TRUE.
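A model solution; PURGE = TRUE removes successfully loaded files from the stage:

```sql
COPY INTO ORDERS
FROM @MY_CSV_STAGE
ON_ERROR = 'SKIP_FILE'  -- skip any file containing errors
PURGE = TRUE;           -- delete files after a successful load
```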
You want to automatically ingest CSV files that land in an external stage MY_S3_STAGE into the EVENTS table.
Create a Snowpipe called EVENTS_PIPE with auto_ingest enabled that runs COPY INTO EVENTS FROM @MY_S3_STAGE.
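A model solution; AUTO_INGEST = TRUE relies on cloud event notifications (e.g. S3 event messages) being configured for the stage's location:

```sql
CREATE PIPE EVENTS_PIPE
  AUTO_INGEST = TRUE
AS
  COPY INTO EVENTS FROM @MY_S3_STAGE;
```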
A table EVENTS has a VARIANT column called PAYLOAD containing: {"user_id": 42, "action": "click", "metadata": {"page": "/home"}}
Write a SELECT that extracts user_id (as INTEGER), action (as STRING), and the nested page from metadata.
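One model solution using colon notation for top-level keys and dot notation for the nested path, with :: casts:

```sql
SELECT
  PAYLOAD:user_id::INTEGER      AS user_id,
  PAYLOAD:action::STRING        AS action,
  PAYLOAD:metadata.page::STRING AS page
FROM EVENTS;
```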
A PRODUCTS table has a VARIANT column TAGS containing an array: ["electronics", "laptop", "portable"]. You need one row per tag.
Write a query using FLATTEN to explode the TAGS array into individual rows, returning product_id and the tag value as a string.
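A model solution using LATERAL FLATTEN; each array element appears in the VALUE column of the flattened result:

```sql
SELECT
  p.product_id,
  t.value::STRING AS tag
FROM PRODUCTS p,
  LATERAL FLATTEN(INPUT => p.TAGS) t;
```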
You want to capture all changes (INSERT, UPDATE, DELETE) to the ORDERS table for a downstream CDC pipeline.
Create a standard stream called ORDERS_STREAM on the ORDERS table.
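A model solution; a standard stream records inserts, updates, and deletes (updates appear as a delete/insert pair):

```sql
CREATE STREAM ORDERS_STREAM ON TABLE ORDERS;
```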
You have ORDERS_STREAM with new inserts. Merge new orders into the ORDERS_SUMMARY table (upsert by order_id).
Write a MERGE statement that inserts new orders from ORDERS_STREAM into ORDERS_SUMMARY when order_id doesn't exist, or updates amount when it does.
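One model solution, assuming both tables have order_id and amount columns (the exercise names only these); reading the stream in a committed DML statement advances its offset:

```sql
MERGE INTO ORDERS_SUMMARY s
USING ORDERS_STREAM st
  ON s.order_id = st.order_id
WHEN MATCHED THEN
  UPDATE SET s.amount = st.amount
WHEN NOT MATCHED THEN
  INSERT (order_id, amount) VALUES (st.order_id, st.amount);
```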
You want to run a stored procedure REFRESH_SUMMARY every 15 minutes.
Create a task called REFRESH_TASK using COMPUTE_WH that runs CALL REFRESH_SUMMARY() every 15 minutes. Remember to resume it.
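A model solution; tasks are created in a suspended state, so the ALTER ... RESUME is required before the schedule takes effect:

```sql
CREATE TASK REFRESH_TASK
  WAREHOUSE = COMPUTE_WH
  SCHEDULE = '15 MINUTE'
AS
  CALL REFRESH_SUMMARY();

ALTER TASK REFRESH_TASK RESUME;
```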
Queries on the TRANSACTIONS table are slow because they filter heavily on the ORDER_DATE column and the table is multi-terabyte.
Add a clustering key on ORDER_DATE to the TRANSACTIONS table.
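A model solution; clustering is worthwhile here because the table is multi-terabyte and queries prune on ORDER_DATE:

```sql
ALTER TABLE TRANSACTIONS CLUSTER BY (ORDER_DATE);
```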
Users frequently look up individual customer records by CUSTOMER_ID in a 500M row CUSTOMERS table. Clustering doesn't help because it's a point lookup.
Enable Search Optimization on the CUSTOMER_ID column of the CUSTOMERS table for equality lookups.
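A model solution; the search optimization service accelerates selective point lookups that micro-partition pruning alone cannot:

```sql
ALTER TABLE CUSTOMERS ADD SEARCH OPTIMIZATION ON EQUALITY(CUSTOMER_ID);
```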
The CUSTOMERS table was dropped accidentally 2 hours ago. Time Travel retention is 24 hours.
Write the SQL to restore the dropped CUSTOMERS table.
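A model solution; UNDROP works because the table is still within its Time Travel retention window:

```sql
UNDROP TABLE CUSTOMERS;
```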
You want to share the REVENUE_SUMMARY view with a partner account (partner_account_id: PARTNER123).
Create a share called REVENUE_SHARE, grant USAGE on DATABASE ANALYTICS_DB, USAGE on SCHEMA PUBLIC, SELECT on VIEW REVENUE_SUMMARY, then add PARTNER123 to the share.
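One model solution, assuming the view lives in ANALYTICS_DB.PUBLIC; note that in practice only a secure view can be granted to a share:

```sql
CREATE SHARE REVENUE_SHARE;
GRANT USAGE ON DATABASE ANALYTICS_DB TO SHARE REVENUE_SHARE;
GRANT USAGE ON SCHEMA ANALYTICS_DB.PUBLIC TO SHARE REVENUE_SHARE;
GRANT SELECT ON VIEW ANALYTICS_DB.PUBLIC.REVENUE_SUMMARY TO SHARE REVENUE_SHARE;
ALTER SHARE REVENUE_SHARE ADD ACCOUNTS = PARTNER123;
```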
The EMAIL column should be fully visible to the ANALYST role but masked as '****@****.com' for all other roles.
Create a masking policy called EMAIL_MASK that shows the real value to ANALYST role, and shows '****@****.com' to all others. Then apply it to the EMAIL column on CUSTOMERS.
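A model solution; the policy branches on CURRENT_ROLE() and is then bound to the column:

```sql
CREATE MASKING POLICY EMAIL_MASK AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() = 'ANALYST' THEN val
    ELSE '****@****.com'
  END;

ALTER TABLE CUSTOMERS MODIFY COLUMN EMAIL SET MASKING POLICY EMAIL_MASK;
```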