# Knock Documentation # Getting started A technical and non-technical introduction to the basics of Knock, and a step-by-step guide to get you going in minutes. ## What is Knock? Learn more about what Knock does and how it helps power your product notifications. --- title: What is Knock? description: Learn more about what Knock does and how it helps power your product notifications. tags: ["getting started", "explainer", "explained"] section: Getting started --- Knock is notifications infrastructure that helps you implement notifications your users will love, without the effort of building and maintaining your own in-house notifications system. In this overview, we’ll cover some of the foundational concepts of Knock. Knock is designed with both developers and product teams in mind: it’s easy for developers to implement quickly, and simple for less-technical users to maintain with our intuitive dashboard.
## Workflows

Workflows are a foundational concept in Knock. They allow you to easily model complex messaging flows across channels using a variety of logical function steps while respecting a user’s individual preferences. All Knock notifications are sent by triggering a workflow.

An image of a workflow diagram

Your application can trigger workflows using our REST API, any one of our [available SDKs](/sdks/overview), or by integrating a CDP like Segment as an event source. You can use the dropdown menu on the code sample below to look at a sample in your language of choice:

Knock processes each workflow run using a combination of the following concepts:

## Recipients

Recipients are in most cases users in your application. As you trigger workflows for recipients, Knock creates a cache of the data needed to notify them on different platforms, like an email address, phone number, avatar URL, or push token. Knock also stores custom properties you pass from your application to customize their notifications, like a plan type, user role, or timezone.

```javascript title="An object used to create a User"
{
  // Id is a required prop
  id: "1",
  // Knock also supports default props for common channels
  name: "John Hammond",
  email: "hammondj@ingen.net",
  phone_number: "555-555-5555",
  avatar: "https://ingen.net/headshots/hammondj.jpg",
  timezone: "America/Costa_Rica",
  // You can add as many custom props as needed. These will be
  // merged onto the top-level User object
  properties: {
    "title": "CEO",
    "planType": "allAccess",
    "userType": "admin"
  }
}
```

## Channels

Channels in Knock represent a specific provider you have configured to send notifications. You can include channel steps in your workflows to send notifications with the providers you already use in production. Knock supports the following channel types and providers:

Knock supports sending email with [AWS SES](/integrations/email/aws-ses), [Mailersend](/integrations/email/mailersend), [Mailgun](/integrations/email/mailgun), [Mailjet](/integrations/email/mailjet), [Mailtrap](/integrations/email/mailtrap), [Mandrill](/integrations/email/mandrill), [Postmark](/integrations/email/postmark), [Resend](/integrations/email/resend), [Sendgrid](/integrations/email/sendgrid), [SMTP](/integrations/email/smtp), and [Sparkpost](/integrations/email/sparkpost).

Knock supports sending SMS with [Africa's Talking](/integrations/sms/africas-talking), [AWS SNS](/integrations/sms/aws-sns), [Mailersend](/integrations/sms/mailersend), [MessageBird](/integrations/sms/messagebird), [Plivo](/integrations/sms/plivo), [Sinch](/integrations/sms/sinch), [Sinch MessageMedia](/integrations/sms/sinch-message-media), [Telnyx](/integrations/sms/telnyx), [Twilio](/integrations/sms/twilio), and [Vonage](/integrations/sms/vonage).

Knock supports sending push messages with [Apple Push Notification Service (iOS)](/integrations/push/apns), [Expo (React Native)](/integrations/push/expo), [Firebase Cloud Messaging (Android)](/integrations/push/firebase), and [OneSignal](/integrations/push/one-signal).

Knock supports sending chat messages with [Slack](/integrations/chat/slack), [Discord](/integrations/chat/discord), [Microsoft Teams](/integrations/chat/microsoft-teams), and [WhatsApp](/integrations/chat/whatsapp).

Knock provides [a real-time in-app feed API](/integrations/in-app/knock) for receiving notifications, along with drop-in components to display them to your users.
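To make the trigger call described above concrete, here's a minimal sketch of triggering a workflow through the REST API with a server-side `fetch` call. The `new-comment` workflow key, recipient ID, and `data` properties are illustrative assumptions, not values from your account:

```javascript title="Triggering a workflow (sketch)"
// Minimal sketch: trigger a Knock workflow via the REST API.
// Assumes a "new-comment" workflow exists and KNOCK_API_KEY holds your secret key.
const response = await fetch("https://api.knock.app/v1/workflows/new-comment/trigger", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.KNOCK_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    // Recipients can be IDs of users you've previously identified with Knock
    recipients: ["1"],
    // Variable data made available to the workflow's message templates
    data: { document_name: "Isla Nublar site plan", comment_text: "Looks great!" },
  }),
});

const { workflow_run_id } = await response.json();
console.log("Triggered workflow run:", workflow_run_id);
```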
A notification that is generated as a part of a workflow is called a `Message`, and Knock allows you to define dynamic message templates using a combination of a drag-and-drop editor and the Liquid templating language. This helps product and marketing teams standardize on one templating system instead of using different templating languages for different providers. It also has the added benefit of lifting these messages out of your codebase so you can iterate quickly on customer communications without a developer.

## Functions

Each workflow can combine multiple function steps to model complex logic that creates better notification experiences. You can combine the following function steps with any number of channel steps to create personalized notifications for your users:

- [A batch step](/designing-workflows/batch-function) condenses multiple activities into one notification, e.g. batch all of the comments on this document for one hour and then send one email with all of the activities.
- [A delay step](/designing-workflows/delay-function) waits for a specified duration before proceeding to the next step in a workflow, e.g. send the new user a follow-up email ten days after they sign up.
- [A branch step](/designing-workflows/branch-function) uses multiple conditions to execute different branches of logic, e.g. if `user.planType === 'pro'` send them email A, else send them email B.
- [A throttle step](/designing-workflows/throttle-function) controls how many times a user is notified for a particular workflow over a specified duration, e.g. trigger the `server is down` workflow every minute while the server is down, but only send a max of one email every 5 minutes.
- [A fetch step](/designing-workflows/fetch-function) makes an HTTP request to an external service and uses the returned data in subsequent steps, e.g. query an MLS API for recent home sales in the user’s zip code and render them in an email.
- [A trigger workflow step](/designing-workflows/trigger-workflow-function) allows you to trigger another workflow from within the current workflow, e.g. trigger a "welcome_sequence" workflow after receiving an "account_setup" notification.

In addition to combining channel steps and function steps to create complex workflows, you can augment these steps with additional logic based on the user recipient, inputs from your application, or the status of previous workflow steps. These are called step conditions.

## Step conditions

Step conditions exist across both channel and function steps, and they allow you to conditionally execute steps based on trigger payload data, user properties, or the status of previous steps.

**Examples:**

- Only send an email message if an in-app message has not been seen
- Only send an in-app notification if `recipient.plan === "pro"`
- Only execute a delay step if `delay === true` in the trigger payload

In addition to giving your technical and non-technical users the ability to construct these workflows via a drag-and-drop editor, Knock also enables your users to exercise control over their own notification experience using a flexible preferences model.

## Preferences

In Knock, each workflow run is executed on behalf of a recipient, and each recipient can specify their preferences to receive notifications across a number of different criteria: channel types, individual workflows, and workflow categories.
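As an illustrative sketch of that model, a recipient's preference set might look like the object below. The shape mirrors the preference set described in the preferences docs, but the workflow and category keys here are made-up examples:

```javascript title="An example preference set (sketch)"
{
  // Channel-type preferences apply across all workflows
  channel_types: {
    email: true,
    sms: false
  },
  // Per-workflow preferences, keyed by workflow key (example key shown)
  workflows: {
    "new-comment": {
      channel_types: { email: false, in_app_feed: true }
    }
  },
  // Per-category preferences, keyed by workflow category (example key shown)
  categories: {
    "collaboration": true
  }
}
```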
An image of a preference set

Application developers have control over how these preference sets are presented to the user and which options to surface, but Knock enforces these preferences during every workflow run automatically.

You can learn more about how to set a user's preferences in our [preferences overview](/preferences/overview).

## Next steps

Now that you understand some of the core concepts of Knock, you can either start building with Knock or explore some of the more advanced features Knock offers.

### Build something

If you want to start by adding Knock to your existing system, you can check out our quick start guide to implement your first workflow. This quick start will help you integrate Knock with your backend codebase.

If you want to keep learning about Knock using a curated example application, check out our catalog of [example apps](/getting-started/example-apps).

### Keep learning

While the workflow engine is at the heart of Knock, our goal is to build a complete notification system for our customers. Here is an overview of some of the more advanced features that we provide:

#### UI components

Knock provides developers with prebuilt React components to use in their applications. You can read more about building in-app UI with Knock for both web and mobile [here](/in-app-ui/overview).

#### Advanced concepts

There is a lot more to learn about Knock, and our [concepts overview page](/concepts/overview) is a good place to start. Here are use cases our customers commonly solve with Knock:

- Powering [translation and localization](/concepts/translations) and managing timezone-aware delivery
- Creating advanced notification logic using [subscriptions](/concepts/subscriptions) and [schedules](/concepts/schedules)
- Integrating Knock with your application's data model, using [tenants](/concepts/tenants) and [objects](/concepts/objects) to power customized experiences
- Using [the template editor](/designing-workflows/template-editor/overview) to standardize messaging templates across providers

#### Developer tools

Knock is a developer-first platform, with both [environment](/concepts/environments) and [commit models](/concepts/commits). If you want to work with Knock resources in code, you can use our [Management API](/developer-tools/management-api) or [CLI](/developer-tools/knock-cli). Once you're sending notifications through Knock, we offer observability tools like [workflow run logs](/send-notifications/debugging-workflows) (to examine all steps of workflow execution in your dashboard) and data streaming into a monitoring system like Datadog with [extensions](/integrations/extensions/overview).

## Quick start

Quickly get up and running with Knock.

--- title: Get started with Knock description: Quickly get up and running with Knock. tags: ["getting started"] section: Getting started ---

In this guide, you'll integrate Knock with your backend web application and send your first notification using Knock.

First, [create a Knock account](https://dashboard.knock.app/signup) if you don't already have one and log into the [Knock dashboard](https://dashboard.knock.app).

We have SDKs available in [most major languages](/sdks/overview#server-side-sdks). Don't see your language listed here? [Let us know](mailto:support@knock.app)!

You can find your public and secret API keys under the **Developers** section of the Knock dashboard. Since we're working on the backend here, you'll want to use the secret key.
As a best practice, your API key should be set as an environment variable and should not be checked into source control. ```bash KNOCK_API_KEY='sk_example_123456789' ``` Next we'll design our first workflow in Knock via the dashboard. A workflow encapsulates a notification in Knock. Each workflow takes a trigger call via the Knock API, runs the data you provide through a set of logic you configure, and outputs the actual messages that will be sent to your end users. All channel routing and message design takes place within the workflow. Here's how to build your first workflow: Click the "+ Workflow" button in the top right corner of the Knock dashboard. Name it whatever you like. To send a notification, a workflow needs at least one [channel step](/designing-workflows/channel-step). To add this step, we'll click “edit steps” to enter the workflow canvas editor. Here we can see a number of steps available for us to add to our workflow, including functions (such as [batch](/designing-workflows/batch-function) and [delay](/designing-workflows/delay-function)) and channels. Choose the delivery channel you'd like to use in your workflow and drag it onto the workflow canvas. After adding a channel step, we can configure the notification's content by clicking on "Edit template" in the channel's edit step view to see that step's [message template](/designing-workflows/template-editor/overview). The template starts with default copy, so we'll just use that for now. Before we leave the workflow canvas and head back to your backend, let’s click on the trigger step to grab a payload data sample to use when we call Knock. This sample payload is auto-generated when you create a workflow within the Knock dashboard. It gives us the JSON blob we'll need to pass through as `data` in our trigger call in order to populate any of the custom properties defined in our workflow. Knock follows a versioning model similar to Git. This means that before you can trigger your new workflow via the API, you'll need to commit it to your current environment to activate the workflow. Click the back arrow in the top-left corner of the workflow canvas to get back to the workflow overview page, where you can commit your changes. Now we're ready to trigger our workflow via the Knock API. You can also learn more about workflows and channels in Knock via our [guide on designing workflows](/send-notifications/designing-workflows). Now, you'll trigger your workflow to notify a set of users. When triggering workflows, you need to provide the following required pieces of data in your call to the Knock API: - `recipients` – The list of users to notify. - `data` – The variable data that will populate your notification templates. Here you'll use the sample data payload we grabbed in step 3. In the example below, we trigger a new comment notification workflow for two project members, using [inline identification](/reference#trigger-workflow-inline-identify). Learn more about trigger calls in our [API reference](/reference#trigger-workflow). Knock uses [logically separated environments](/concepts/environments) to control the roll-out of your notifications. When you're happy with the way your workflows work and look, you just need to promote them to production to start sending notifications to your real users. See our [going to production](/guides/implementation-guide#going-to-production) checklist to review a complete set of steps you'll need to take to push your workflows to production. 
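For reference, here's a rough sketch of the kind of trigger call described in the steps above, using the REST trigger endpoint with inline identification of two project members. The workflow key, user IDs, and `data` properties below are illustrative assumptions; swap in the values from your own dashboard and sample payload:

```javascript title="Triggering a workflow with inline identification (sketch)"
// Sketch: trigger a workflow for two project members, identifying them inline.
// Assumes a "new-comment" workflow and KNOCK_API_KEY set in the environment.
await fetch("https://api.knock.app/v1/workflows/new-comment/trigger", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.KNOCK_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    // Inline identification: each recipient is upserted as it's notified
    recipients: [
      { id: "project-member-1", name: "Ellie Sattler", email: "esattler@ingen.net" },
      { id: "project-member-2", name: "Alan Grant", email: "agrant@ingen.net" },
    ],
    // The `data` payload that populates your message templates
    data: { project_name: "Visitor Center", comment_text: "Ready for review" },
  }),
});
```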
This was a simple overview to send your first notification with Knock. Read on to see how Knock can drive your notification needs, no matter their complexity. - [Learn about Knock's core data concepts](/concepts/overview) - [Learn how to set up a real-time, in-app notification feed in minutes](/notification-feeds/getting-started) ## Example apps Example applications to help you get started with Knock. --- title: Knock example apps description: Example applications to help you get started with Knock. tags: ["nodejs", "using knock", "getting started", "react"] section: Getting started --- Below you'll find a number of Knock example apps to learn from or incorporate into your project. ## In-app notification examples (web) ## Web app examples ## Mobile examples # Concepts Learn about the key concepts in Knock. ## Overview Learn about the key concepts in Knock. --- title: Core concepts description: Learn about the key concepts in Knock. tags: ["how knock works"] section: Concepts --- ## Workflows In Knock, all notifications are sent via a workflow. Each workflow acts as a container for the logic and templates that are associated with a type of notification in your system. [Learn more →](/concepts/workflows) ## Channels A channel in Knock represents a configured provider, such as Sendgrid for email, to send notifications to your recipients. Most providers within Knock use credentials that you supply to deliver notifications on your behalf. These credentials and other settings are what make a configured channel. [Learn more →](/concepts/channels) ## Commits Knock uses a commit model to version changes that you make to all of your Knock resources. When you make a change to a workflow or a layout in the Knock dashboard, you'll need to commit it to your development environment before those changes will appear in workflows triggered via the API. [Learn more →](/concepts/commits) ## Environments Knock uses the concept of environments to ensure logical separation of your data and configuration. This means that users and preferences created in one environment are **never** accessible to another. Environments usually map to the environments you have in your software development life cycle (SDLC). [Learn more →](/concepts/environments) ## Recipients A Recipient within Knock is any [User](#users) or [Object](#objects) that may wish to receive notifications. [Learn more →](/concepts/recipients) ## Users A user in Knock represents an individual who should receive a message. A user's profile information contains important attributes about the user that will be used in messages (name, email). The user object can contain other key-value pairs that can be used to further personalize your messages. [Learn more →](/concepts/users) ## Preferences Preferences enable your users to opt-out of the notifications you send using Knock. [Learn more →](/concepts/preferences) ## Objects An object represents a resource in your system that you want to map into Knock. Objects are a powerful and flexible way to ensure Knock always has the most up-to-date information required to send your notifications. They also enable you to send notifications to non-user recipients. 
You can use objects to:

- send in-app notifications to non-user resources in your product (the activity feed you see on a Notion page is a good example)
- send out-of-app notifications to non-user recipients (such as a Slack channel)
- reference mutable data in your notification templates (such as when a user edits a comment before a notification is sent)

[Learn more →](/concepts/objects)

## Subscriptions

A subscription represents a relationship between a non-user entity (an Object) and a Recipient (the subscriber). Subscriptions are used to model pub/sub behavior and lists of recipients that Knock will automatically fan out a workflow trigger to on your behalf. [Learn more →](/concepts/subscriptions)

## Schedules

A schedule allows you to automatically trigger a workflow at a given time for one or more recipients. You can think of a schedule as a managed, recipient-timezone-aware cron job that Knock will run on your behalf. [Read more →](/concepts/schedules)

## Tenants

Tenants represent segments your users belong to. You might call these "accounts," "organizations," "workspaces," or similar. This is a common pattern in many SaaS applications: users have a single login joined to multiple tenants to represent their membership within each. Within Knock you can model your tenant objects as first-class entities and use them to scope features. [Learn more →](/concepts/tenants)

## Messages

A message in Knock represents a notification delivered to a recipient on a particular channel. Messages contain information about the request that triggered their delivery, a view of the data sent to the recipient, and a timeline of their lifecycle events. [Learn more →](/concepts/messages)

## Translations

Translations support localization in Knock. They hold the translated content for a given locale, which you can reference in your message templates with the `t` Liquid filter. [Learn more →](/concepts/translations)

## Conditions

Knock uses conditions to model checks that determine variations in your workflow runs. They provide a powerful way to create more advanced notification logic flows. [Learn more →](/concepts/conditions)

## Variables

Variables within Knock let you set shared constants or secrets that you can use in all of the workflows and templates under your account. Variables can be overridden at the environment level to set per-environment constants. [Learn more →](/concepts/variables)

## Audiences

Audiences are user segments that you can notify. You can bring audiences into Knock programmatically with our API or a supported reverse-ETL source. [Learn more →](/concepts/audiences)

## Workflows

Learn more about what a workflow in Knock is, and how to think about grouping together your cross-channel notifications into different workflows.

--- title: Workflows description: Learn more about what a workflow in Knock is, and how to think about grouping together your cross-channel notifications into different workflows. tags: ["categories", "archive", "archived"] section: Concepts ---

In Knock, all notifications are sent via a workflow. Each workflow acts as a container for the logic and templates that are associated with a kind of notification in your system.

Workflows are represented as a set of steps, which are either function or channel steps. Functions apply logic to your workflow run, like batching to collapse multiple calls into single notifications or delays to pause the execution of a workflow for some duration.
Channel steps produce a notification that will be delivered via a [configured channel](/concepts/channels). All steps can also have conditions to determine if and when they should run.

Workflows in Knock:

- Always have a unique `key` associated
- Are always executed for a single recipient at a time
- Contain all of the logic and templates for the notifications you send
- Can have recipient preferences attached
- Can be triggered via the API, an event, or on a schedule for a recipient

## Thinking in workflows

A workflow groups together cross-channel notifications and the business logic that governs those notifications into a single entity. Workflows are always executed on behalf of a single recipient and can have other properties associated with them, like the "actor" who performed the action that triggered the notification.

It's highly recommended to group notifications about the same "topic" or "entity" in your system into individual workflows. While it might be tempting to build a single workflow with conditional logic for all of your notification use cases that can be triggered from anywhere within your application with the same workflow `key`, modularizing your workflows by topic and use case allows you to offer the highest level of configurability to your users via [Preferences](/concepts/preferences). Our customers also find that concise, topic-specific workflows are easier to maintain and iterate on.

As an example, if we're building a document collaboration app where users can comment on specific documents, we might group all of the logic about the cross-channel comment notifications we have into a single `new-comment` workflow.

Note: remember that in Knock, all notifications are sent via a workflow. There's no other way to send notifications to your recipients, so every notification you want to send must be represented in a workflow.

You can read more about how to build your workflows and the features available within the workflow builder under the [designing workflows section of the documentation](/designing-workflows).

## Workflows and notification templates

Each workflow you build will contain one or more [channel steps](/designing-workflows/channel-step). It's these channel steps that contain the templates that will be rendered to produce a notification sent to the recipient of the workflow run.

The templates associated with a channel step **only** exist in the context of that channel step. That means that templates cannot currently be shared across workflows, or even across other channel steps within the same workflow.

## Managing workflows

Knock workflows can be managed either via the Knock Dashboard or programmatically via the [Management API](/mapi). The [Knock CLI](/cli) offers a convenient way to work with the management API locally to make updates to workflows and their templates.

Note: remember that workflows and other resources in Knock can only ever be edited in the development environment. Learn more about versioning and environments.

### Workflow categories

Each workflow can have one or more categories associated with it. Categories are useful for grouping related types of workflows together and offer a way to apply a user's preferences across many workflows.

To set a `category` for a given workflow, go to that workflow's page in the dashboard, click the "..." menu, and select "Manage workflow." From there, you'll be able to add categories.

Note: workflow categories are case sensitive.
### Version control for workflows

All changes to workflows, including changes made to the templates inside of a workflow, are version controlled. Changes must be made in the development environment and are then "committed" and "promoted" between environments for that version to be live within an environment. This allows you to confidently make changes to workflows without affecting any workflows running in production.

Read more about [environments](/concepts/environments) and [versioning](/concepts/commits) in Knock.

### Workflow status

Each workflow has an `Active`/`Inactive` status that is displayed in your dashboard's **Workflows** section. The status defaults to `Active` and can be set by clicking on the workflow and using the "Status" selector. This is your kill switch for a given workflow should you need it; any attempt to trigger an `Inactive` workflow will result in a `workflow_inactive` [error](/reference#error-codes).

The status setting operates independently from the commit model so that you can immediately enable or disable a workflow in any environment without needing to go through environment promotion. **It is environment-specific and will only be applied to the current environment.**

### Archiving workflows

Archiving a workflow allows you to permanently remove a workflow from Knock. When you archive a workflow it will be removed from **all environments** and cannot be called via API. Once a workflow is archived, it **cannot be undone**. If you have delayed runs for a workflow that is archived, when the workflow run resumes after the delay it will immediately terminate.

## Running workflows

Workflows defined in Knock are executed via a trigger, which starts a workflow run for the specified recipients using the `data` passed to the workflow trigger.

Note: it's important to know that in Knock a workflow run is always executed against a single recipient. Workflows can always be invoked for multiple recipients, but each run will only be for a single recipient.

### Triggering a workflow

In Knock, workflows can be triggered in three different ways:

- **API call**: workflows can be [triggered directly via an API call](/send-notifications/triggering-workflows) to our workflow trigger endpoint. This is the most common form of integration and means that Knock is integrated into your backend codebase, usually alongside your application logic.
- **Events**: using different [event sources](/integrations/sources/overview/), you can connect Knock to CDPs such as Segment and Rudderstack and map the events those systems produce to workflows that should be triggered.
- **Schedules**: [workflows can be scheduled](/concepts/schedules) to be run for one or more recipients, in a recipient's local timezone, on a one-off or recurring basis.

### Canceling a workflow run

Any triggered workflow that has an active delay or batch step can also be canceled to halt the execution of that workflow run. Workflow cancellations today must happen through the cancellation API and can only occur when a `cancellation_key` has been specified on the workflow trigger.

[Read more about canceling workflows](/send-notifications/canceling-workflows)

### Workflow runs and recipients

When a workflow is triggered via the API, we return a `workflow_run_id` in the API response. This ID represents the workflow run for all of the recipients that the workflow was triggered against.
For each recipient included in the workflow trigger or that the workflow should fan out to [via subscriptions](/concepts/subscriptions), a new workflow run is enqueued. We call this the recipient workflow run.

Recipient runs are visible within the Knock dashboard by going to **Developers** > **Logs**. Each run can be inspected to view its current state as well as the steps executed for the workflow. It's also possible from a workflow run log to see the messages (notifications) produced by the run.

### Workflow run scope

When a workflow run is executed, associated state is loaded to be used within the templates and conditions defined in the workflow. This state is known as the workflow run scope. The run scope can be modified over the course of the workflow run by fetching additional data via the [fetch function](/designing-workflows/fetch-function).

[Read more about the properties available](/designing-workflows/template-editor/variables)

## Automate workflow management with the Knock CLI

In addition to working with workflows in the Knock dashboard, you can programmatically create and update workflows using the [Knock CLI](/developer-tools/knock-cli) or our [Management API](/developer-tools/management-api).

If you manage your own workflow files within your application, you can automate the creation and management of Knock workflows so that they always reflect the state of the workflow files you keep in your application code.

The Knock CLI can also be used to commit changes and promote them to production, which means you can automate Knock workflow management as [part of your CI/CD workflow](/developer-tools/integrating-into-cicd).

### Workflow files structure

When workflows are pulled from Knock, they are stored in directories named by their workflow key. In addition to a `workflow.json` file that describes all of a given workflow's steps, each workflow directory also contains individual folders for each of the [channel steps](/designing-workflows/channel-step) in the workflow that hold additional content and formatting data.

```txt title="Local workflow files structure"
workflows/
└── my-workflow/
    ├── email_1/
    │   ├── visual_blocks/
    │   │   └── 1.content.md
    │   └── visual_blocks.json
    ├── in_app_feed_1/
    │   └── markdown_body.md
    └── workflow.json
```

If you're migrating your local workflow files into Knock, you can arrange them using the example file structure above and then push them into Knock with a single command using [`knock workflow push --all`](/cli#workflow-push). Each `workflow.json` file should follow the structure defined [here](/mapi#workflows-object).

You can learn more about automating workflow management in the [Knock CLI reference](/cli). Feel free to contact us if you have questions.

## Frequently asked questions

No, there's no limit to the number of workflows you can have within your Knock environment. While it's possible to create per-customer workflows using the management API, we recommend avoiding doing this in favor of using [per-tenant overrides](/concepts/tenants#custom-branding) and [preferences](/concepts/preferences) to control individual workflows.

Yes, you can set a workflow's [status](/concepts/workflows#workflow-status) to `Inactive` to disable it. Any in-progress workflow runs will be immediately terminated.

## Channels

Learn about what a channel is in Knock and how you can use channels to power your cross-channel notification delivery.
--- title: Channels description: Learn about what a channel is in Knock and how you can use channels to power your cross-channel notification delivery. tags: [] section: Concepts ---

A channel in Knock represents a configured provider to send notifications to your recipients. Most providers within Knock use credentials that you supply to deliver notifications on your behalf. These credentials and other settings are what make a configured channel.

Within Knock, we split channels into different types, where each type has at least one provider associated that can be configured:

- [Email](/integrations/email/overview) (such as Sendgrid, Postmark)
- [In-app](/integrations/in-app/overview) (such as feeds, toasts, banners)
- [Push](/integrations/push/overview) (such as APNs, FCM)
- [SMS](/integrations/sms/overview) (such as Twilio, Telnyx)
- [Chat](/integrations/chat/overview) (such as Slack, Microsoft Teams, and Discord)
- [Webhook](/integrations/webhook/overview) (send webhooks to custom channels or enable your own customers to configure webhooks in your product)

You can read more about the various types of [channel integrations available here](/integrations/overview).

## Managing channels

You can create and manage channels within Knock from the dashboard under the **Integrations** > **Channels** section. A created channel exists across all environments in your Knock account and uses the same ID for each environment.

**Please note**: only admins and owners on an account can manage channels.

## Channel settings

For each channel you create in the Knock dashboard, you will need to configure the channel per environment for it to be valid. Each provider requires different configuration data, and you can see the required settings in the [integrations guide](/integrations/overview).

Because channel configuration is **per-environment**, it's possible to have separate settings for your testing/sandbox environments vs. your production environments. Channel settings can easily be cloned across environments when needed.

Note: unlike other types of configuration in Knock, channel settings are never versioned, meaning that when they are saved the configuration is synchronized to the Knock configuration store to immediately take effect.

## Using channels to send notifications

In Knock, all notification messages are sent via a channel step configured within a [workflow](/concepts/workflows). Notifications to be delivered are forwarded to the channel by our message delivery service using the settings that you provide; the delivery service handles the communication to the underlying provider and the retry logic if a message delivery should fail.

For most providers, you can inspect the delivery logs produced when trying to send a message on the **Messages** > **Logs** page within the Knock dashboard.

## Setting additional, per-recipient data for a channel

Some providers may require additional, per-recipient data to send notifications. A good example of this is a push provider like [APNs](/integrations/push/apns), which requires a unique, device-specific token to know how to route a push notification to the recipient.

In Knock, we refer to this concept as "Channel Data" as it represents the data that exists for a recipient on a particular channel. You can read more about [setting channel data here](/managing-recipients/setting-channel-data). You can also see channel data requirements in the documentation for each provider.
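As a rough sketch of what setting channel data can look like over the REST API (the channel ID, user ID, and token below are placeholders, and the exact data shape depends on the provider, so check that provider's integration guide):

```javascript title="Setting push channel data for a user (sketch)"
// Sketch: store an APNs device token as channel data for a user.
// KNOCK_APNS_CHANNEL_ID is a placeholder for the channel ID from your dashboard.
await fetch(
  `https://api.knock.app/v1/users/user-1/channel_data/${process.env.KNOCK_APNS_CHANNEL_ID}`,
  {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${process.env.KNOCK_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      // Push channel data holds the device tokens to deliver to
      data: { tokens: ["apns-device-token-from-the-client"] },
    }),
  }
);
```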
## Frequently asked questions

There's no restriction on how many different channels you can have, including multiple channels for the same provider.

## Commits

Learn about how Knock's commit and promotion model works.

--- title: Commits description: Learn about how Knock's commit and promotion model works. tags: ["branches", "env", "version control", "versions", "commit", "promote", "promotion", "revert", "rollback", "staging", "active", "inactive", "diffs", "push"] section: Concepts ---

To version the changes you make in your [environments](/concepts/environments), Knock uses a commit model. When you make a change to a workflow or a layout in the Knock dashboard, you'll need to commit it to your development environment before those changes will appear in workflows triggered via the API.

After you modify a resource, you'll see a "Save" button that allows you to store those changes. When you're ready to permanently store your updates with version control, they should be committed with the "Commit to development" button that will come into focus after changes have been saved.

**A few things to note:**

- Channel configurations, branding, and variables do not need to be committed, as they live at the account level. This means that if you make a change to a channel configuration, it will update immediately on notifications sent in that environment.
- Any changes you have saved but not yet committed **will** apply when you're using the test runner. This allows you to test your latest changes before you commit them to your development environment.
- You can work with Knock resources outside of your dashboard if you prefer. We offer both a [Management API](/developer-tools/management-api) and a [command line interface](/developer-tools/knock-cli) for interacting with Knock resources programmatically. The commit model applies to all methods of interacting with Knock resources, whether directly in the dashboard or with the Management API or CLI.

## Visualizing changes between commits

Clicking the "Commit to development" button will show you a view of changes between your current commit and the most recent version of the resource that you're updating.

Commit diffs are also available on your full commit log (viewable on the "Commits" page in your dashboard), so you can view the commit history for a resource and know exactly what was changed with each commit.

![Commit diffs in Knock's version control](/images/commit-diff-showcase.gif)

## Promoting commits

Knock is designed to allow large teams to create and manage notifications at scale. That means that changes must be versioned, tested, and promoted to production environments, so that if there are any issues they can be rolled back with ease.

Knock uses a model where all changes to the production environment must be **promoted** and cannot be made directly. Changes must be made in the development environment, then staged and tested before being rolled out (similar to a git-based workflow).

Note: there is one exception to the commit and promote rule: the active/inactive status on a workflow lives independently from the commit model so that you can immediately enable or disable a workflow in any environment without needing to go through environment promotion. A workflow's status is environment-specific and will only be applied to the current environment.

To promote a committed change to a higher environment, navigate to the "Commits" page in your Knock dashboard and click on "Unpromoted changes."
Here you'll see a list of commits that are ready for promotion. Clicking "View commit" on a given commit will show you a commit diff for that change, and clicking the "Promote to [environment]" button will promote the staged commit to the next-higher environment (whose name is displayed on the button). **A typical deployment lifecycle in Knock looks like:** 1. Introduce any backend changes to support a new workflow (users and preference properties) 2. Build the workflow in a dev environment in Knock and commit it to that environment 3. Test the workflow 4. When you're ready to go live, promote the workflow to production ## Reverting a commit If you've made a change in a commit that you want to revert, you can use the "Revert commit" feature to "undo" that change. You can find the revert commit action on the "Commits" page in the dashboard, under the "Unpromoted changes" and "Commit log" tabs. **Note**: you can only revert a commit in the development environment. If you need to revert a change to a higher environment, you must first revert it in development and then promote the revert commit. **Reverting a commit will**: - Create a new commit with a message that indicates the commit reverts a preceding commit - Wind back the state of the resource to the change that precedes the commit - Undo any uncommitted changes on the resource Because the revert will produce a new commit, you can then promote that commit to other environments to make that change live in those environments. ## Environments Learn about how Knock's isolated environment model works and how it fits into your system development lifecycle. --- title: Environments description: Learn about how Knock's isolated environment model works and how it fits into your system development lifecycle. tags: ["env", "version control", "variables", "promote", "promotion", "staging"] section: Concepts --- Knock uses the concept of environments to ensure logical separation of your data between local, staging, and production environments. This means that recipients and preferences created in one environment are **never** accessible to another. The API key you use determines the environment into which you'll be sending data. You can find your environment-specific API keys under the "Developer" section of the Knock dashboard. ## Working with Knock resources across environments In order to prevent unintended changes to Knock resources (like a [workflow](/concepts/workflows) or [layout](/integrations/email/layouts)) in a production setting, we use a commit model that requires changes to be saved and committed in your Development environment and [promoted](/concepts/commits#promotion-and-rollback) to higher environments. [Read more about Commits](/concepts/commits) ## Create additional environments By default your Knock account comes with two environments: Development and Production. If you need an additional environment in Knock to mirror your own development lifecycle (for example, a Staging environment) you can add it on the settings page of the Knock dashboard. To create a new environment, go to **Settings** > **Environments**. You'll see a button to "Create environment." When you create an additional environment, it will be inserted between Development and Production. This means all changes will continue to be introduced in your Development environment and will need to be promoted through additional environments until they land in Production. 
Subsequent new environments will always be added one "level" lower than Production; environments cannot be re-ordered, as this would break the promotion model for previously-promoted changes.

## Environment-based access controls

We recognize the importance of protecting your sensitive data, so we designed Knock from the ground up with privacy and security in mind.

There are two tools you can use to control access to your data in the Knock dashboard:

- [Roles and permissions.](/manage-your-account/roles-and-permissions) Knock offers granular roles for the different functions your team members may want to carry out in Knock, such as support team members that need to debug issues for customers but shouldn't be making changes to notification logic.
- [Customer data obfuscation.](/manage-your-account/data-obfuscation) You can use our per-environment data obfuscation controls to configure whether you want your team members to be able to view customer data in the Knock dashboard.

## Recipients

A Recipient in Knock represents a person or a non-user entity that receives notifications.

--- title: Recipients description: A Recipient in Knock represents a person or a non-user entity that receives notifications. tags: ["RecipientIdentifier", "recipient", "user", "timezone", "time zone", "locale"] section: Concepts ---

A Recipient within Knock is any [User](/concepts/users) or [Object](/concepts/objects) that may wish to receive notifications.

Knock persists information about recipients so it can send those recipients notifications and give you a single source of truth for the notifications sent, for debugging and logging purposes.

Recipients have:

- **Identifiers.** A string from your system that uniquely represents the recipient.
- **Properties.** Structured and unstructured data for the recipient, including but not limited to the name, email, and phone number.
- **Preferences.** The rules under which the recipient should or should not receive notifications.
- **Channel data.** The channel-specific data needed to send a recipient a notification on a particular channel, such as tokens for sending push notifications to a given channel or access tokens to send notifications to a chat channel like Slack.

## `RecipientIdentifier` definition

A recipient identifier can be one of:

- A string user ID for a previously identified user (`user-1`)
- An object reference dictionary (`{ "collection": "my-collection", "id": "object-1" }`) for a previously identified object
- A dictionary containing a recipient to be [identified inline](/managing-recipients/identifying-recipients#inline-identifying-recipients)

This can be expressed as the following type:

```typescript
type RecipientIdentifier =
  | string
  | { collection: string; id: string }
  | Record<string, any>;
```

## Identifying recipients

For Knock to be able to send notifications to your recipients, you must first identify those recipients to synchronize them with Knock. We call this process "identification", and it can be done ahead of time or lazily via inline identification in your workflow triggers.

Identifying stores the properties associated with your recipients in Knock, so that you can reference those properties in the notifications you send out.

[Read more about identifying your recipients ->](/managing-recipients/identifying-recipients)

## Custom properties

Recipients in Knock can have any number of custom properties set on them, which you set during the identification process. Some properties, like `email` or `phone_number`, are required for notifications to be delivered to the recipient.
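To make that concrete, here's a minimal sketch of identifying a recipient with reserved and custom properties over the REST API. The user ID and property values are illustrative; the exact body shape is documented in the identify endpoint reference:

```javascript title="Identifying a user with custom properties (sketch)"
// Sketch: identify (upsert) a user, setting reserved and custom properties.
await fetch("https://api.knock.app/v1/users/user-1", {
  method: "PUT",
  headers: {
    Authorization: `Bearer ${process.env.KNOCK_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    // Reserved properties Knock knows how to use for delivery
    name: "John Hammond",
    email: "hammondj@ingen.net",
    timezone: "America/Costa_Rica",
    // Custom properties are merged onto the recipient and usable in templates
    plan_type: "allAccess",
    user_role: "admin",
  }),
});
```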
## Managing lists of recipients

You can use our [Subscriptions](/concepts/subscriptions) feature to create a Knock-managed list of recipients that should be notified. Subscriptions are useful for modeling pub/sub behavior.

## Recipient timezones

A recipient can have an optional `timezone` property, which should be a [valid tz database time zone string](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones), like `America/New_York` or `Europe/London`.

By default, if no recipient timezone is set, `Etc/UTC` will be used. However, a [default timezone](/manage-your-account/account-timezone) can be specified at the account level under "Settings", which will override this default for all recipients.

## Frequently asked questions

No, there's no limit on the number of recipients you can have within Knock.

We support non-user entities (Objects) receiving notifications in Knock because some notifications are delivered to non-user entities. For example, a Slack notification that sends to a channel. That notification is not delivered to a user but to an entity that connects Slack and your system (such as a Project or a Team).

Currently, there's no limit to the size of the properties you can add to a recipient. We reserve the right to impose a limit here in the future, however.

## Users

Learn more about Users in Knock and see code examples to get started.

--- title: Users description: Learn more about Users in Knock and see code examples to get started. tags: ["recipients", "identify", "actor"] section: Concepts ---

A [User](/reference#users) represents a person who may need to be notified of some action occurring in your product. A user is a type of recipient within Knock and is the most common type of entity that you may wish to send a notification to.

## Sending user data to Knock

User data must be synchronized to Knock to send the user a notification or to reference that user in a notification. We refer to this process as identifying users.

[Read more about identifying users ->](/managing-recipients/identifying-recipients).

## Guidelines for use

### User identifiers

The identifier for a user is important as it's the unique key that we will use to merge users and determine recipients for a notification. Generally, the best practice here is to use your internal identifier for your users as the `id`.

Please note: The maximum number of characters for the identifier is 256, and it cannot contain a "/" or "#". You cannot change a user's id once it has been set, so we recommend that you use a non-transient `id` like a primary key rather than a phone number or email address.

### Required attributes

The following attributes are required for each user you identify with Knock.

| Property | Description                                                     |
| -------- | --------------------------------------------------------------- |
| id       | An identifier for this user from your system, should be unique |

### Optional attributes

The following attributes are optional, depending on the channel types you decide to use with Knock.
| Property     | Description                                                                                                                                                                                                                 |
| ------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| email        | The primary email address for the user (required for email channels)                                                                                                                                                        |
| name         | The full name of the user                                                                                                                                                                                                    |
| avatar       | A URL for the avatar of the user                                                                                                                                                                                             |
| phone_number | The [E.164](https://www.twilio.com/docs/glossary/what-e164) phone number of the user (required for SMS channels)                                                                                                             |
| timezone     | A valid [tz database time zone string](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) (optional for [recurring schedules](/concepts/schedules#scheduling-workflows-with-recurring-schedules-for-recipients)) |

### Storing user properties

In addition to the system attributes defined on the user schema above, Knock will keep track of any `properties` (key/value pairs) that you send to us. These _traits_ are always merged onto a user and returned to you.

Traits are useful when you need to perform additional personalization on a user, like denormalizing the current plan they're on so you can use this to determine the portion of a notification they should receive.

You can nest the properties you send as deeply as needed, and Knock will perform a deep merge with these properties on each subsequent upsert. Note that this means that existing properties cannot be explicitly removed, but you can overwrite them with a `null` value.

### The user object

Once sent to Knock, the user object returned to you in the Knock payload looks like this:

```json title="User object"
{
  "id": "user_1234567890",
  "name": "Dummy User",
  "email": "dummy@example.com",
  "updated_at": "2021-03-07T12:00:00.000Z",
  "created_at": null,
  "__typename": "User"
}
```

| Property     | Description                                                         |
| ------------ | ------------------------------------------------------------------- |
| id           | The unique user identifier                                          |
| properties\* | Traits sent for the user are merged back onto the main user object  |
| created_at   | The created at time (provided by you)                               |
| updated_at   | The last time we updated the user                                   |

\* All properties appear at the top level of the user object.

## Retrieving users

Users can be retrieved from Knock to see the current state of their properties using the `users.get` method.

## Deleting users

Users can be deleted from Knock via the `users.delete` method. Deleting a user from Knock will have the following effect:

- The user will no longer be able to be a recipient or an actor in a workflow
- The user will no longer appear in the dashboard under the "Users" list
- Any in-app messages that reference the user will be replaced by a "missing user" marker

## Frequently asked questions

Commonly you'll want to send notifications to entities in your system that are not currently registered users in your product (think guests or invited users). In these situations, we recommend:

1. Identifying the user with a unique identifier, such as their email address, or with a prefix (`guest_`) to denote the different type.
2. Where possible, if the notified user becomes a registered user in your system, using our [merge API](#merging-users) to merge the guest user and the registered user to preserve message sending history.
It might feel counterintuitive to store registered users and non-registered users under a single collection in your Knock environment, but Knock should always be viewed as a _cache_ of information about users and entities that may need to be notified in your system.

Yes, they are. Each environment has a separate, isolated set of users. If you need to share users across environments, you must re-identify them in each environment.

If you want to store and notify different types of users within your Knock environment, we recommend prefixing the id with the type. So if you had two distinct user types, `owners` and `customers`, you could pass Knock ids like `customer_123` and `owner_456`.

If you need to send a notification to an entity in your system, you should have a look at modeling those as [Objects](/concepts/objects). Objects can represent **any non-user entity**.

When you add new team members in the Knock dashboard, we automatically add them as "Users" within your Knock Development environment so you can send them notifications. We do this to help you with testing.

No, all users who are sent a notification are identified in your Knock environment and are persisted. If you have a use case here that you wish to discuss with us, please [get in touch](mailto:support@knock.app).

If you need to edit or update a user's attributes in Knock, you can either use the [identify a user endpoint](https://docs.knock.app/reference#identify-user) or [inline identification](https://docs.knock.app/managing-recipients/identifying-recipients#inline-identifying-recipients) when triggering a workflow.

## Preferences

Learn how the notification preference system works in Knock.

--- title: "Preferences" description: "Learn how the notification preference system works in Knock." tags: ["recipients", "conditions", "prefs", "preferences", "users", "user preferences"] section: Concepts ---

[Preferences](/reference#preferences) enable your users to opt out of the notifications you send using Knock.

## How preferences work

A user has a `PreferenceSet`. A `PreferenceSet` is a JSON object that tells Knock which channels, categories, and/or workflows a user has opted out of receiving.

When Knock runs a workflow for a user, we evaluate their `PreferenceSet`. A message will not send if the user has opted out of receiving it.

With Knock preferences you can power standard preference use cases, such as the topic-channel preferences grid pictured below, as well as advanced use cases such as per-workflow preferences, send time preferences, and more.

An image of a preference set

## Learn more

To learn more about how to build your preference center with Knock, how to set preferences for your users, and advanced concepts like per-tenant preferences, object preferences, and preference conditions, go to our [preferences overview](/preferences/overview).

## Objects

Learn the basics of Objects in Knock.

--- title: Objects description: Learn the basics of Objects in Knock. tags: ["recipients", "identify"] section: Concepts ---

An [Object](/reference#objects) represents a resource in your system that you want to map into Knock. In this guide we'll walk through how to use objects for a few different use cases in Knock.
We'll start with an overview of objects and how to use them, then we'll walk through two common use cases for objects: Slack channel notifications and handling mutable data on long-running notifications (such as digests). **Note:** Objects are an advanced feature within Knock. You can send multi-channel notifications across all channel types (except Slack) without touching the Objects API. If you're just getting started, we'd recommend coming back to objects when you've already started to leverage a few channels using Knock. ## An overview of objects Objects are a powerful and flexible way to ensure Knock always has the most up-to-date information required to send your notifications. They also enable you to send notifications to non-user recipients. You can use objects to: - Send out-of-app notifications to non-user recipients (such as a [Slack channel](#slack-channel-notifications)). - [Reference mutable data in your notification templates](/designing-workflows/template-editor/referencing-data) (such as when a user edits a comment before a notification is sent). Knock roadmap alert. We have Objects API support for in-app feed notifications on our roadmap.

If you have a use case for this functionality, please send a note to support@knock.app or use the feedback button at the top of this page to let us know.

## Sending object data to Knock

All objects belong to a `collection`, which groups objects of the same type together. An object should be unique within a collection, identified by the `id` given. We use the `{collection, id}` pair to know when to create or update an object.

Objects follow the same rules as all other items in Knock in that they are unique and logically separated per Knock environment.

The way you manage object data in Knock is largely the same as [how you manage your user data](/concepts/users#sending-user-data-to-knock). As with users, we support three approaches for managing Knock objects: individual, bulk, and inline.

You can use the set object API to send us data for a single object. [API reference →](/reference#set-object)

You can use the bulk set objects API to send us data for many objects at once. This endpoint allows you to identify up to 1000 objects at a time. [API reference →](/reference#bulk-set-objects)

You can also integrate object management into your workflow trigger calls. If you include additional object metadata (other than `id` and `collection`) in a workflow trigger call, Knock will perform an asynchronous action to upsert these objects as part of processing the workflow. [API reference →](/reference#trigger-workflow-inline-identify)

## Guidelines for use

### Collection naming

Use plural collection names when possible. The collection name should describe the group of one or many objects within the collection. Good examples of collection names are `projects`, `teams`, and `accounts`.

### The object identifier

The object `id` should be unique within the collection. It should also be a stable identifier, likely the primary key of the object in your system, so it can be easily referenced later. Please note: object ids **cannot be changed once set**.

### Properties

Objects can contain any number of key-value property pairs that you can then reference in templates and trigger conditions. Properties will always be deeply merged between upserts, meaning that existing properties (including nested properties) will be updated with the newly provided values. Note that this means that existing properties cannot be explicitly removed, but you can overwrite them with a `null` value.

## Object subscribers

You can use [subscriptions](/concepts/subscriptions) to subscribe [recipients](/concepts/recipients) to objects as subscribers. When an object is passed to a workflow trigger, Knock will automatically fan out and run a workflow for every subscriber on that object.

Nested object hierarchies. One of the most powerful things about object subscriptions is that they can contain other objects.

As an example, a workspace object may have a list of projects as subscribers, each of which has a list of project follower subscribers.

When you trigger a workflow with that workspace as a recipient, Knock will fan out through the hierarchical relationship you've created and notify all projects and project followers under that workspace.

## Referencing object data in templates

You can reference object data in templates using the `object` filter to load object data into a template. You can reference an object by a static identifier, or by a dynamic identifier passed in via data in your workflow trigger.

For example, if we have a `projects` collection that contains an object under the identifier `proj_1`, we can load that object into a template via a static identifier like this:

```liquid title="Referencing an object by a static identifier"
{% assign project = "proj_1" | object: "projects" %}
```

Or, we can load an object by a dynamic identifier. For example, if we have a workflow trigger that contains a `project_id` property, we can load that object into a template like this:

```liquid title="Referencing an object by a dynamic identifier"
{% assign project = data.project_id | object: "projects" %}
```

Once an object is loaded into a template, you can reference any of its properties using dot notation. You can read more in our [guide on referencing data in templates](/designing-workflows/template-editor/referencing-data).

## Examples

### Slack channel notifications

A common notification use case we see in SaaS applications is the ability for users to connect an object in the product they're using to a channel in their own Slack workspace. That way, when something happens in that object (e.g. a comment is left), they receive a notification about it in their connected Slack channel.

Let's take a fictional example where we have an audio collaboration service that allows its customers to connect a Project object to a Slack channel. Once the Project and Slack channel are connected, all Comments left within the Project will result in notifications sent to the customer's Slack channel.

Here's how we'd use Knock objects to solve this.

1. **Register our Project object to Knock.** Typically, whenever the project is created or updated we'll want to send it through to Knock.
2. **Store the Slack connection information for the Project.** Once our customer chooses to connect their Slack channel to the Project, we have a callback that adds the Slack information as Channel Data.
3. **Add Slack as a step to our workflow.** Inside the Knock dashboard, we add a new Slack step to our `new-comment` workflow that will send a notification displaying the comment that was left in our product.
4. **Send the Project as a recipient in your workflow trigger.** Now when we trigger our `new-comment` workflow, we also add our Project object as a recipient so that the newly added Slack step will be triggered.

Knock then executes the workflow for this Project object as it would for any user recipients sent in the workflow trigger, skipping over any steps that aren't relevant. (In this case, the Project object only has one piece of channel data mapped to it—the Slack channel—so it won't trigger notifications for any other channel steps in our `new-comment` workflow.) When the Slack step is reached, the connection information we stored earlier is used to determine which Slack channel to send the message to and how to authenticate to that channel.

## Subscriptions

Learn how to use subscriptions to notify a list of recipients associated with an object in your data model.
--- title: Subscriptions description: Learn how to use subscriptions to notify a list of recipients associated with an object in your data model. tags: ["subscriptions", "publish subscribe", "pub/sub", "lists", "alerts", "topics"] section: Concepts --- Subscriptions are an extension to [Objects](/concepts/objects) and express the relationship between a [Recipient](/concepts/recipients) (the subscriber) and an Object. You can use subscriptions for: - Creating notifications for a large number of recipients (e.g. all users of your product) - Alerting use cases, where users can opt into and out of an alert - Publish/subscribe models where you want to fan out to a set of users subscribed to a topic Any Object within Knock can be subscribed to by one or more recipients, and the entire set of subscribers can be notified by triggering a workflow for the object, without you needing to keep the relationship data within your system of who is subscribed to what. ## How subscriptions work 1. Identify an object in a collection that represents the topic, or entity you wish to subscribe recipients to 2. Subscribe one or more recipients to the object by creating a subscription between the recipient and the object 3. Trigger a workflow for the object On step #3, Knock will handle the fan out of the workflow trigger **to all recipients that are subscribers**, automatically enqueuing a workflow run for the recipient on your behalf. ## Integrating subscriptions Note: for all of the examples below you will need to have an [object identified within Knock](/concepts/objects#sending-object-data-to-knock). In our examples below, we create an object under a `project_alerts` collection with an id `project-1`. [Go to API documentation →](/reference#subscriptions) ### Subscribing recipients to an object Subscribing a recipient to an object creates an `ObjectSubscription` entity describing the relationship between the `Recipient` and the `Object`. You can subscribe up to 100 recipients to an object at a time by passing one or more `RecipientIdentifiers`. There is no limit to the number of recipients you can subscribe to an object. ```javascript title="Subscribing multiple recipients to an object" await knock.objects.addSubscriptions("project_alerts", "project-1", { recipients: ["esattler", "dnedry"], properties: { // Optionally set other properties on the subscription for each recipient }, }); ``` Similar to workflow triggers, you can inline identify recipients while subscribing them to an object. ```javascript title="Identifying users while subscribing them to an object" await knock.objects.addSubscriptions("project_alerts", "project-1", { recipients: [ { id: "esattler", name: "Ellie Sattler", email: "esattler@ingen.net", }, { id: "dnedry", name: "Dennis Nedry", email: "dnedry@ingen.net", }, ], properties: { // Optionally set other properties on the subscription for each recipient }, }); ``` ### Unsubscribing recipients from an object To remove one or more recipients (up to 100) from an object, you can pass a list of recipient identifiers. ```javascript title="Delete subscriptions for provided recipients" await knock.objects.deleteSubscriptions("project_alerts", "project-1", { recipients: ["esattler", "dnedry"], }); ``` ### Triggering a workflow for all subscribers of an object By default when you trigger a workflow for an object that has subscriptions attached Knock will fan out to all subscribers and enqueue a new workflow run for that recipient, with the information passed into the workflow trigger. 
```javascript title="Triggering a workflow for all subscribers of an object" await knock.workflows.trigger("alert-workflow", { recipients: [{ collection: "project_alerts", id: "project-1" }], data: { // Data to be passed to all workflow runs }, }); ``` ### Retrieving subscriptions for an object You can retrieve a paginated list of subscriptions for an object, which will return the `recipient` subscribed as well. ```javascript title="Retrieving a paginated list of subscriptions for an object" const { entries, page_info: pageInfo } = await knock.objects.listSubscriptions( "project_alerts", "project-1", { after: null }, ); ``` ### Retrieving subscriptions for a user You can retrieve a paginated list of active subscriptions for a user, which will return the `object` that the user is subscribed to as well. ```javascript title="Retrieving a paginated list of subscriptions for a user" const { entries, page_info: pageInfo } = await knock.users.getSubscriptions( "user-1", { after: null }, ); ``` ## Accessing subscription properties in a workflow run When triggering a workflow for a recipient from a subscription, the `properties` defined on the subscription are made available for use within the workflow run scope. You can access the properties under the `recipient.subscription` namespace. As an example, if you have a property `role` under your subscription properties, you can access it as `recipient.subscription.role` in the workflow run scope. Note: If you're looking to reference the parent object that the recipient is subscribed to, you can side-load the parent object [using the `object` filter in liquid](/designing-workflows/template-editor/referencing-data). ## Modeling nested subscription hierarchies It's possible to model nested subscription hierarchies by associating child objects as subscribers of a parent object. This allows you to create structures like "organizations" having many "teams" which have many "team members" (users). ```javascript title="Adding child objects as subscribers of a parent object" await knock.objects.addSubscriptions("organizations", "org-1", { recipients: [ { collection: "teams", id: "team-1", name: "Org 1, Team 1" }, { collection: "teams", id: "team-2", name: "Org 1, Team 2" }, ], }); ``` Once you've established a nested hierarchy like the above, it's also possible to notify **all child subscribers** from a parent object. In the example above, that means we could notify all team members of an organization by setting the recipient of the trigger to be the organization. Note: currently we only support subscriptions at a maximum depth of 2, meaning you can model a hierarchy such as{" "} {"parent -> child -> user"} but no deeper. If you need to support a deeper nesting, please{" "} get in touch. } /> ## Deduplication by default Knock always deduplicates recipients when executing a notification fan out, including for workflow triggers with subscriptions. Knock will ensure your notification workflow is executed only once for each unique recipient in the following cases: - When the recipient appears both in the initial trigger and as a subscriber to one of your objects. - When the recipient appears multiple times within a nested subscription hierarchy. ## Frequently asked questions There's no upper bound in the number of subscribers you can have against a recipient, although you can only **manage** 100 recipients on an object at a time using our API. Yes! 
An object with subscribers _can also_ be subscribed to a parent object, allowing you to create nested hierarchies of objects (like a Team has many Projects, and each Project has many Members). Right now, you can only **view** the subscribers of an object in the dashboard. You can do so under **Objects** > **Subscriptions**. Yes, you can pass a set of `properties`, which is a set of unstructured key-value pairs that you set any arbitrary data about. Right now the answer is no, but we're interested in hearing about your use case here as we're considering adding this functionality in the future. Yes, you can. Once you trigger a workflow for an object that has subscribers attached, you will see a workflow run for each of the subscribers under the "Workflow runs" page. Yes, by default when you trigger a workflow for an object that has subscriptions attached Knock will generate a workflow run for the object itself AND all of the attached subscribers. No, currently Knock [deduplicates all recipients](#deduplication-by-default) when fanning out to object subscribers. If this is blocking one of your use cases or your adoption of Knock, please contact our [support team](mailto:support@knock.app). No, currently we do not expose the object the subscription belongs to under the workflow run scope. No, currently the `actor` is **always** excluded from being a recipient in a workflow trigger if they are a subscriber to an Object recipient. No, currently we do not support [creating schedules](/concepts/schedules) for subscribers of an object. Each individual subscriber will need to be added as a recipient when creating the workflow schedule. Yes, you can. [Workflow cancellation](/send-notifications/canceling-workflows) requests can be scoped to one or more specific recipients. You can target any recipient who was notified via an object subscription, even if that recipient was not explicitly included in the workflow trigger request. ## Schedules Learn how to use Schedules to run workflows at set times for your recipients in a recurring or one-off manner. --- title: Schedules description: Learn how to use Schedules to run workflows at set times for your recipients in a recurring or one-off manner. tags: [ "crons", "schedules", "digest", "recurring", "weekly", "daily", "monthly", "schedule", ] section: Concepts --- A schedule allows you to automatically trigger a workflow at a given time for one or more recipients. You can think of a schedule as a managed, recipient-timezone-aware cron job that Knock will run on your behalf. Some examples of where you might reach for a schedule: - A digest notification where your users can select the frequency in which they wish to receive the digest (every day, every week, every month). - A reminder notification for a specific event or deadline, sent only once at a given date and time. ## How schedules work 1. [Create a workflow](/designing-workflows) that you wish to run in the future. 2. Using the API, [set a repeating schedule](#scheduling-workflows-with-recurring-schedules-for-recipients) or a [non-recurring schedule](#scheduling-workflows-with-one-off-non-recurring-schedules-for-recipients) for one or more recipients for the workflow. Knock will preemptively schedule workflow runs for the recipient(s) that you've provided, and execute those runs at the scheduled time. At the end of the workflow run (and in case of using a recurring schedule), a future scheduled workflow will be enqueued based on the recipient's next schedule. 
## Scheduling workflows with recurring schedules for recipients To schedule a workflow for a recipient using recurring schedules, you must first have a valid, committed workflow in your environment. We can then set a schedule with `repeats` for one or more recipients (up to 100 at a time). ```typescript const { Knock } = require("@knocklabs/node"); const knock = new Knock(process.env.KNOCK_API_KEY); const schedules = await knock.workflows.createSchedules("park-alert", { recipients: ["jhammond", "esattler", "dnedry"], repeats: [ // Repeat daily at 9.30am only on weekdays { frequency: "daily", days: "weekdays", hours: 9, minutes: 30, }, ], ending_at: "2024-01-02T10:00:00Z", // Schedule will stop after this date data: { type: "dinosaurs-loose" }, tenant: "jpark", }); ``` ## Scheduling workflows with one-off, non-recurring schedules for recipients To schedule a workflow for a recipient using a non-recurring schedule, you must also have a valid and committed workflow in your environment. We can then set a schedule with the `scheduled_at` property, specifying the moment when this workflow should be executed. ```typescript const { Knock } = require("@knocklabs/node"); const knock = new Knock(process.env.KNOCK_API_KEY); const schedules = await knock.workflows.createSchedules("park-alert", { recipients: ["jhammond", "esattler", "dnedry"], scheduled_at: "2023-12-22T17:45:00Z", ending_at: "2023-12-31T23:59:59Z", // Schedule will not execute after this time data: { type: "dinosaurs-loose" }, tenant: "jpark", }); ``` ### Schedule properties | Variable | Type | Description | | -------------- | --------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `recipients` | RecipientIdentifier[] | One or more recipient identifiers, or complete recipients to be upserted. | | `workflow` | string | The workflow to trigger. | | `repeats` | ScheduleRepeat[] | A list of one or more repeats (see below). Required if you're creating a recurring schedule. | | `data` | map | Custom data to pass to every workflow trigger. | | `tenant` | string | A tenant to pass to the workflow trigger. | | `actor` | RecipientIdentifier | An identifier of an actor, or a complete actor to be upserted. | | `scheduled_at` | utc_datetime | A UTC datetime in ISO-8601 format representing the start moment for the recurring schedule, or the exact and only execution moment for the non-recurring schedule. | | `ending_at` | utc_datetime | A UTC datetime in ISO-8601 format that indicates when the schedule should end. Once the current schedule time passes `ending_at`, no further occurrences will be scheduled. | Note: when using an Object as a recipient for a scheduled workflow, only the object itself will receive the notification. Subscribers to that object will not be included. If you want to schedule workflows for subscribers of an object, you must add each subscriber individually as a recipient when creating the workflow schedule. } /> ### ScheduleRepeat properties | Variable | Type | Description | | -------------- | ------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------- | | `frequency` | RepeatFrequency | The frequency in which this repeat schedule should run, one of monthly, weekly, daily, or hourly. | | `interval` | number (optional) | The interval in which the rule repeats. Defaults to 1. 
Setting to 2 with a `weekly` frequency would mean running every other week. |
| `day_of_month` | number (optional) | The exact day of the month that this repeat should run. |
| `days` | DaysOfWeek[], "weekdays", "weekends" | The days of the week that this repeat rule should run. Can provide "weekdays" or "weekends" as a shorthand. |
| `hours` | number (optional) | The hour this schedule should run (in the recipient's timezone). Defaults to 00. |
| `minutes` | number (optional) | The minute this repeat should run (in the recipient's timezone). Defaults to 00. |

## Modeling repeat behavior

Every recurring schedule accepts one or more repeat rules, which allow you to express complex rules like:

- Every Monday at 9am.
- Every weekday at 10.30am.
- Every other Monday, Tuesday, and Friday at 6pm.
- Every year at midnight.

A schedule repeat has the following type structure:

```typescript
enum DaysOfWeek {
  Mon = "mon",
  Tue = "tue",
  Wed = "wed",
  Thu = "thu",
  Fri = "fri",
  Sat = "sat",
  Sun = "sun",
}

enum RepeatFrequency {
  Monthly = "monthly",
  Weekly = "weekly",
  Daily = "daily",
  Hourly = "hourly",
}

type ScheduleRepeatProperties = {
  frequency: RepeatFrequency;
  interval?: number;
  day_of_month?: number;
  days?: DaysOfWeek[] | "weekdays" | "weekends";
  hours?: number;
  minutes?: number;
};
```

### Example repeat rules

To illustrate how to model a repeat rule, here are some common examples:

**Every Monday at 9am**

```
{ "frequency": "weekly", "days": ["mon"], "hours": 9 }
```

**Every weekday at 10.30am**

```
{ "frequency": "weekly", "days": "weekdays", "hours": 10, "minutes": 30 }
```

**Every other Monday, Tuesday, and Friday at 6pm**

```
{ "frequency": "weekly", "interval": 2, "days": ["mon", "tue", "fri"], "hours": 18, "minutes": 0 }
```

## Updating schedules

Up to 100 recipient schedules can be updated in a single call. Keep in mind that the properties passed in will be applied to all schedules.

```typescript title="Updating schedules"
const { Knock } = require("@knocklabs/node");
const knock = new Knock(process.env.KNOCK_API_KEY);

const schedules = await knock.workflows.updateSchedules({
  schedule_ids: workflowScheduleIds,
  ending_at: "2024-06-01T00:00:00Z", // Update when the schedule should end
  data: { foo: "bar" },
});
```

## Removing schedules

Up to 100 schedules can be deleted at a time, causing any already enqueued schedules to be cancelled for a recipient.

```typescript title="Removing schedules"
const { Knock } = require("@knocklabs/node");
const knock = new Knock(process.env.KNOCK_API_KEY);

const schedules = await knock.workflows.deleteSchedules({
  schedule_ids: workflowScheduleIds,
});
```

## Listing scheduled workflows

Schedules can be listed per recipient (for a user or an object), or for an individual workflow:

```typescript title="Listing schedules for a user"
const { Knock } = require("@knocklabs/node");
const knock = new Knock(process.env.KNOCK_API_KEY);

const { entries: schedules } = await knock.users.getSchedules("sam");
```
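Schedules can also be listed for an object recipient. Here's a minimal sketch, assuming the Node SDK exposes an `objects.getSchedules` method that mirrors `users.getSchedules` (the collection and object ID are illustrative):

```typescript title="Listing schedules for an object"
const { Knock } = require("@knocklabs/node");
const knock = new Knock(process.env.KNOCK_API_KEY);

// Method name assumed; lists schedules where the recipient is the
// `project-1` object in the `projects` collection.
const { entries: schedules } = await knock.objects.getSchedules(
  "projects",
  "project-1",
);
```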
```typescript title="Listing schedules for a specific workflow" const { Knock } = require("@knocklabs/node"); const knock = new Knock(process.env.KNOCK_API_KEY); const { entries: schedules } = await knock.workflows.listSchedules( "workflow-key", ); ``` Schedules include a `next_occurrence_at` property which computes the **next time that a schedule will be executed**. Schedules also include a `last_occurrence_at` property which indicates when was the last time the schedule was executed. ## Workflow data in a scheduled workflow run Workflows in Knock are triggered either via an API call or via a Source event, both of which will pass the `data` associated. In the case of a scheduled workflow, the workflow will be triggered with an empty data payload by default. There are 2 ways in which to get data into each of your scheduled workflow runs: 1. **Define static data passed to every triggered workflow on a schedule.** We can include an optional `data` payload when we create our schedule. Any workflow runs triggered by that schedule will include the data payload within their workflow run scope. 2. **Fetch data from an HTTP endpoint to use in your workflow.** You can use an [fetch function step](/designing-workflows/fetch-function) to fetch data for a triggered scheduled workflow to "enrich" the data available with information from a remote server (via HTTP). ## Executing schedules in a recipient's timezone Knock supports a `timezone` property on the recipient that automatically makes a scheduled workflow run timezone aware, meaning you can express recurring schedules like "every monday at 9am in the recipient's timezone." Recipient timezones must be a [valid tz database time zone string](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones), like `America/New_York`. [Read more about recipient timezone support](/concepts/recipients#recipient-timezones). Note: executing schedules in recipient timezones is currently only supported by{" "} recurring schedules. } /> ## Frequently asked questions You can use the `scheduled_at` attribute to start your schedule at a particular time in the future. You can use an HTTP fetch step to fetch data in your workflow as the first step to execute to fetch dynamic template data used in your workflow. When scheduling a workflow for one or more recipients, you can optionally provide a static set of `data` which will be passed to the invoked workflow. At any point before the scheduled workflow is invoked you can unschedule the workflow for one or more recipients. If a workflow has already run, then [normal workflow cancellation rules](/send-notifications/canceling-workflows) take effect. You'll see workflow runs that initiated from a scheduled workflow in the list of workflow runs. From there you can select the debugger and debug a given workflow. Currently no, but we'll be looking to add this feature in the near future. The `ending_at` parameter allows you to set an expiration time for both recurring and one-off schedules. For recurring schedules, no new occurrences will be scheduled after the `ending_at` time is reached. For one-off schedules, the schedule will not execute if the `scheduled_at` time is after the `ending_at` time. The `ending_at` time must be specified in UTC ISO-8601 format, for example: "2024-01-02T10:00:00Z". Yes, you can update the schedule to change from recurring to non-recurring (or vice versa). This can be done by removing the `repeats` property and setting `scheduled_at` to the desired one-time execution time. 
Scheduled workflow runs will always reference the workflow version that is current when the scheduled run is executed. Any scheduled workflow runs that are not already in flight when you commit your changes will use the updated workflow version. ## Tenants Learn how to use tenants to map your multi-tenant structure to Knock and power per-user, per-tenant notification experiences. --- title: Tenants description: Learn how to use tenants to map your multi-tenant structure to Knock and power per-user, per-tenant notification experiences. tags: ["tenant", "tenancy", "saas", "how knock works", "custom brand", "branding"] section: Concepts --- Tenants represent segments your users belong to. You might call these "accounts," "organizations," "workspaces," or similar. This is a common pattern in many SaaS applications: users have a single login joined to multiple tenants to represent their membership within each. You use tenants in Knock to: - Support a user having a separate notification feed per tenant - Apply per-tenant branding in emails - Define per-tenant preference defaults that apply to all users within that tenant - Apply per-user, per-tenant preferences - 🔜 Power per-tenant template overrides ## A conceptual model of tenants A tenant in Knock: - Is uniquely identified by an `id`, [per-environment](/concepts/variables). In most cases, this `id` is the same `uuid` used to identify the tenant in your system - Can have any number of custom properties - Can store branding overrides and preference defaults - Can be managed via the API Behind the scenes, a tenant in Knock is really just another{" "} Object in a special-system defined collection, $tenants. That means that anything you can do on an object you can do on a tenant. } /> By default, Knock will create a stub tenant object for all unique tenants that you trigger a workflow run for. You can also use the [tenant APIs](/reference#tenants) to create and manage tenant objects from your system to Knock. ## Associating workflow runs with a tenant It's important to note that tenants **do not** have a relationship to the [users](/concepts/users) and [objects](/concepts/objects) you've identified in Knock. That means Knock does not know _which tenant_ to associate with the set of users you're triggering a notification for. Instead, you must explicitly tell Knock as part of a workflow trigger to associate the workflow runs with a tenant. Tenants have a loose coupling to your users so Knock does not need to know anything about the roles and permissions model associated with your product. This means you have less data to synchronize to Knock and reduces the risk of drift between what's current in your system and what's reflected in Knock. If you need to model groups or lists of users, you can use our [subscriptions model](/concepts/subscriptions) to do that. 
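For example, here's a minimal sketch of a workflow trigger that associates its runs with a tenant (the workflow key, recipients, and data payload are illustrative):

```javascript title="Triggering a workflow with a tenant"
const { Knock } = require("@knocklabs/node");
const knock = new Knock(process.env.KNOCK_API_KEY);

// The `tenant` property associates every run (and the messages it produces)
// with the "acme-fish-co" tenant.
await knock.workflows.trigger("new-comment", {
  recipients: ["esattler", "dnedry"],
  tenant: "acme-fish-co",
  data: { comment_id: "comment-123" },
});
```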
Once a workflow run has been triggered with a `tenant`, the Knock workflow engine will do the following: - Find the tenant or create an empty `tenant` object if one does not exist - Expose that tenant object to the workflow run scope as a `tenant` variable - Associates all messages produced in the workflow run with the tenant - Applies any branding overrides to templates rendered - Applies any preference defaults to the recipient's preference set - Fetches any recipient-specific tenant preference sets ## Using tenant data in a workflow run The full tenant object will be exposed, including any custom properties, in the workflow run scope under the `tenant` [namespace](/designing-workflows/template-editor/variables#tenant). You can then use the tenant in a workflow to: - Add per-tenant-specific template changes, like custom messages or details. - Create per-tenant conditions to only trigger steps for particular tenants. ```markdown title="Using tenant data in a notification template" # Hello from {{ tenant.name }} This is a message directly from {{ tenant.name }} going to {{ recipient.email }}. ``` ## Syncing tenant data to Knock To get tenant data into Knock, we expose [various tenant-specific API methods](/reference#tenants). These methods make it possible to create or update a tenant, including any custom properties associated and any tenant settings, which include branding overrides and default preference sets. ### Required attributes | Property | Description | | -------- | ---------------------------------------- | | `id` | A string to uniquely identify the tenant | ### Optional attributes | Property | Description | | ---------- | ----------------------------------------------------- | | `name` | An optional name to associate with the tenant | | `*` | Any custom properties you wish to store on the tenant | | `settings` | A `TenantSettings` object to apply (see below) | ### `TenantSettings` | Property | Description | | --------------------------------- | ---------------------------------------------------------------------------------------------------------- | | `branding.primary_color` | A hex value for the primary color | | `branding.primary_color_contrast` | A hex value for the contrasting color to use with the primary color user | | `branding.logo_url` | A fully qualified URL for an image to use as the logo of this tenant | | `branding.icon_url` | A fully qualified URL for an image to use as the icon of this tenant | | `preference_set` | A complete `PreferenceSet` to use as a default for all recipients with workflows triggered for this tenant | ## Messages and tenants When a workflow is triggered with a `tenant` property, all of the [Messages](/concepts/messages) produced in the workflow run will be tagged with the `id` of the tenant. Tagging messages by the tenant makes it possible to query for tenant-specific messages in both the API and the dashboard. We also expose this behavior for in-app feed messages, making it possible to expose per-user, per-tenant feeds ([see below for an expanded guide](#scoping-in-app-feeds) on this usecase). ## Working with tenants in the Knock dashboard Tenant data is also exposed in the Knock dashboard. 
From the dashboard it's possible to: - View information about specific tenants, including custom properties set - View message logs of messages generated that were associated with the tenant - View workflow run information for all runs associated with the tenant - View and set custom branding settings - View default preferences set for a tenant You can find tenant information under the "Tenants" section in the left-hand menu of the dashboard. ## Guides for using tenants ### Scoping in-app feeds Multi-tenancy is important in your notification system when handling in-app feeds. Lets look at an example. Imagine that we have a SaaS application, Collaborato, where our users belong to one or more different workspaces. When one of our users is active in a current workspace, we want to make sure they only see notifications that are relevant for that workspace. That is, a user in the "Acme Fish Co." workspace should only see notifications that are relevant to "Acme Fish Co." #### Example To support this use case within Knock, we can pass a `tenant` identifier into our trigger calls. This `tenant` does not have to be configured in any way beforehand, it can simply be a unique identifier you choose to represent this group. When retrieving our feed to be displayed, we can then scope the feed to only show items relevant to the tenant: ```jsx title="Client-side feed scoping" // If you're using our `client-js` SDK: import Knock from "@knocklabs/client"; const knockClient = new Knock(process.env.KNOCK_PUBLIC_API_KEY); const feedClient = knockClient.feeds.initialize( process.env.KNOCK_FEED_CHANNEL_ID, { // Scope all requests to the current workspace tenant: currentWorkspace.id, }, ); // Or if you're using the React SDK: ... ; ``` By providing the `tenant` property here, we're letting Knock know that the notifications produced in the `trigger` call belong to a particular tenant and when we're showing the feed to our customers we **only** want to see the feed that's related to that tenant. Under the hood Knock will ensure that the badge counts you receive for the feed will be relevant only to the active workspace, and that no real-time notifications will be received for any messages that aren't relevant to the user. ### Custom branding Enterprise plan feature. Per-tenant branding is only available on our{" "} Enterprise plan. } /> You can use tenants to define default branding settings when sending email notifications that override your account-level brand settings. When you trigger a workflow with a `tenant`, it will use any settings defined on that tenant in place of the account-level brand settings to style your email layout steps. #### Example Let’s say you’re a hospitality company and own two boutique hotels, “The Black Lodge” and “The Great Northern.” Both want custom branding for their reservation update emails. First, we’ll want to add both of these hotels as `Tenants`. Navigate to the “Tenants” tab on the main sidebar of your dashboard and click “Create tenant." There, you’ll add a name and unique ID by which you'll reference the tenant when triggering notifications. You can also upload a logo, an icon, and select primary colors directly from the interface here. Now that the tenant is set up, when you trigger a workflow with an email step you can pass the ID for one of these tenants. It will override the account branding settings with the settings you configured for your tenant. 
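If you'd rather manage this tenant from code than from the dashboard, here's a rough sketch of upserting it with branding settings, assuming the Node SDK's `tenants.set` method (the colors and URLs are illustrative; the branding fields follow the `TenantSettings` table above):

```javascript title="Upserting a tenant with branding settings (sketch)"
const { Knock } = require("@knocklabs/node");
const knock = new Knock(process.env.KNOCK_API_KEY);

// Method name assumed; settings shape follows TenantSettings.
await knock.tenants.set("black-lodge", {
  name: "The Black Lodge",
  settings: {
    branding: {
      primary_color: "#1c1c1c",
      primary_color_contrast: "#ffffff",
      logo_url: "https://example.com/black-lodge/logo.png",
    },
  },
});
```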
If you want to send a reservation reminder to the guests of The Black Lodge, you can pass the ID you set for that hotel, `black-lodge`, into the tenant field of the workflow trigger option to override default account settings with those you've created for this tenant. ### Per-tenant user preferences and tenant preference defaults Enterprise plan feature. Per-tenant user preferences and tenant preference defaults are only available on our{" "} Enterprise plan. } /> Another advanced tenancy use case is managing different sets of preferences for each user-tenant pair. That is, a user may have different preferences configured for "Acme Fish Co." than they do for "Bell's Bagels," two hypothetical workspaces within our example collaboration app, Collaborato. We also support the ability to set per-tenant defaults, where an admin in a tenant within your product can set the default preferences for all users within that tenant. You can learn more about how to set per-tenant preferences and tenant preference defaults in [our preferences guide](/preferences/tenant-preferences). ## Frequently asked questions There are no limits associated with tenants. Yes, you can still use our APIs to work with tenant data, and trigger workflow runs for specific tenants. However, per-tenant preferences and custom branding are features gated for enterprise plans only. Knock does not know anything about the mapping between your users and your tenant entities, meaning you do not need to map user permissions. Absolutely, you can use a tenant as a `recipient` or `actor` in a workflow trigger by referencing it as an object with the structure `{ collection: "$tenants", id: "tenant-id" }`. Yes, you can subscribe recipients to a tenant by setting the collection of the object to subscribe to as `$tenants` and using the `id` of the tenant as the object id. We're currently working on this feature to create per-tenant template overrides at the workflow step level. If you're interested in being an early adopter of this feature, or this is blocking your adoption of Knock, [please get in touch](mailto:support@knock.app?subject=Per-tenant%20templates). While it's technically possible to create per-tenant workflows in Knock, we recommend not doing this where possible and opting to use our step conditions, preferences, and per-tenant templates to provide the customizations you need. The reason is creating and managing per-tenant workflows increases the surface area of the number of notifications you need to support, and more commonly what we've found from working with customers is there are more similarities between per-customer workflows than differences, which can usually be encapsulated in our workflow model. If you find that you have different needs here, we'd love to speak with you. Please [get in touch](mailto:support@knock.app) and we can arrange a consultation with a notification support specialist on the Knock team to walk through your use case. No, today it's only possible to have a single-level of hierarchy for your tenants. If you need to apply deeper hierarchy to your tenant objects, please [get in touch](mailto:support@knock.app) and we can discuss your use case further. ## Messages Learn how Knock models per-recipient notifications with Messages. --- title: Messages description: Learn how Knock models per-recipient notifications with Messages. tags: ["messages", "workflows"] section: Concepts --- ## An overview Some data is subject to retention policy enforcement. 
{" "} Message log data in the Dashboard and the public API are subject to retention policy enforcement. In-app message data and the Feeds API are not. See the{" "} data retention docs for more details on how Knock enforces this policy. } /> A Message in Knock represents a notification delivered to a [User](/concepts/users) or an [Object](/concepts/objects) on a particular channel. This is the core Knock data entity that your recipients will interact with when receiving notifications. Knock exposes a set of [Message APIs](/reference#messages) via which you can query for notifications and update messages individually or in batches. The Knock [Feeds API](/reference#feeds) is a specialized view of messages delivered to an in-app feed channel. The Knock dashboard makes available various message metadata to help you debug your notifications. This includes: - Information about the request that triggered the delivery of the message. - A preview of the message content as displayed for the recipient. - Logs of requests between Knock and your channel provider as Knock works to deliver the message to the recipient. - A timeline of message lifecycle events. ## Statuses Messages have two types of statuses. These are: - **Delivery statuses** — The delivery state of a message as reported by your channel provider. Delivery statuses are mutually exclusive and implicitly managed by Knock as part of notification delivery. - **Engagement statuses** — The way in which the recipient has interacted with the notification. A message can have multiple engagement statuses, and you can manage them yourself via the Knock API. Knock captures changes in message status as events that can be sent to [outbound webhooks](/developer-tools/outbound-webhooks/overview). To learn more, see our [message statuses guide](/send-notifications/message-statuses). ## Link and open tracking Knock provides opt-in, provider agnostic tracking capabilities for your notifications. With link tracking, Knock will capture link-click actions by your recipients as a message event. With open tracking, Knock will embed tracking pixels in email channel messages to help gauge when recipients are opening and reading your email notifications. To learn more, see the [Knock tracking guide](/send-notifications/tracking). ## Translations Learn how to use translations to localize your notifications. --- title: Translations description: Learn how to use translations to localize your notifications. tags: [ "translation", "translations", "translate", "locale", "localization", "l10n", "how knock works", "language", "i18n", "internationalization", ] section: Concepts --- [Translations](/mapi#translations-overview) localize the notifications you send with Knock. Enterprise plan feature. Translations are only available on our{" "} Enterprise plan. } /> ## Get started To get started, enable translations for your account. Go to “Settings” under your account name in the left sidebar and click “Enable translations”. Next you'll need to set a default `locale`. Knock uses the default `locale` when it can't find a translation for a given recipient’s `locale`. Once you've set your default `locale`, you should see a new “Translations” page under “Developers” in the sidebar. This is where you’ll be working with your translations. ## Basic usage [Translations](/mapi#translations-overview) are JSON objects that contain the text for your messages in various locales. 
For example, let’s say you have a customer order notification that you want to localize for French and English users.

```json title="en translation"
{
  "OrderReady": "Your order is ready.",
  "OrderDelayed": "Your order is delayed."
}
```
```json title="fr translation" { "OrderReady": "Votre commande est prête.", "OrderDelayed": "Votre commande est retardée." } ``` Once you have those translations created for the `en` and `fr` locales, you can reference their translation strings in your message templates using the `t` filter: ```json title="Message template editor"

{{ "OrderReady" | t }}

``` Your users must have a `locale` property set for the helper to find translations in their locale, otherwise Knock will use the default locale. You can set a user's `locale` with the [identify endpoint](/reference#identify-user). ## Translation methods: filter vs. tag There are two methods available to you to translate your message templates: the `t` filter and the `t` tag. The `t` filter is used to reference existing translation files. It works best when you have translations that are already created and you want to reference them in your message templates. ```json title="Using t filter in a message template" {{ "congratulationsMessage" | t: recipientName: recipient.name }} ``` In the example above, the `t` filter finds the recipient's `locale` and looks for the `congratulationsMessage` key in the translation file for that locale. It then replaces the `recipientName` variable with the recipient's name. The `t` tag is used to write templates in their default language and automatically generate translations for additional locales. It is best when you have less technical users authoring templates, and you want to automatically generate translations for their templates behind the scenes. ```json title="Using t tag in a message template" {% t %}Congratulations, {{ recipient.name }}!{% endt %} ``` In the example above, we author content in our English default language, wrap that content in our `t` tag, and Knock automatically generates translation files for us behind the scenes. We cover how to use the `t` filter and `t` tag in more detail below. ## Using the `t` filter You can use `t` filter to reference your translations from within a message template. The `t` filter also allows you to use variables, other filters, and special pluralization rules. ### Variables and interpolation You can use variable interpolation in your translations. ```json title="en translation" { "comment": "New comment from {{ actorName }} on your post {{ postName }}.", "like": "{{ actorName }} liked your photo {{ photoTitle}}!" } ``` You can pass variables to the `t` filter: ```json title="Message template editor"

{{ "like" | t: actorName: actor.name, photoTitle: likedPhoto.title }}

``` ### Pluralization Translations support pluralization rules. When you pass the `count` variable to a translation, it looks for pluralization keys in your translation. Those keys are `zero`, `one`, and `other`. You don’t need to reference these in the template. If you pass the `count` variable, it will evaluate it and choose one for you. ```json title="en translation" { "orders": { "shipping": { "zero": "You have no orders currently being shipped.", "one": "You have one order being shipped.", "other": "You have {{ count }} orders being shipped." } } } ``` To pluralize content in a message template, pass the `count` variable: ```json title="Message template editor"

{{ "orders.shipping" | t: count: count }}

```

- If the count is 0, it will choose `zero`, unless `zero` does not exist, in which case it will use `other`.
- A count of 1 corresponds to `one`, and everything else will fall under `other`.

### Other filters in combination

You can still use other filters in combination with `t`, but you’ll use them **after** you use the `t` filter. For example, to titlecase a translation:

```json title="Message template editor"

{{ "congratulationsMessage" | t | titlecase }}

``` ### Namespaced translations When you create a translation, you can supply an optional “namespace.” The namespace helps organize translations of the same locale so you can keep similar concepts together. Below you'll see examples of how to reference namespaced translations from your message templates. Let's start with a translation with a namespace of `shipping`: ```json title="en:shipping translation" { "backordered": "Your order has been backordered so shipping will be delayed.", "shipped": "Your order has been shipped.", "canceled": "Your shipment has been canceled." } ``` To access the contents of the `shipping` translation in your message template you’ll reference the namespace before the key followed by a colon (”:”): ```json title="Message template editor"

{{ "shipping:canceled" | t }}

```

This can be helpful if you use the `canceled` key elsewhere in your translations so that there isn’t a collision. For example, if you had a `payments` translation like this:

```json title="en:payments translation"
{
  "success": "Your payment has been processed.",
  "canceled": "Your payment was canceled."
}
```

You would reference it with the `payments` namespace as well:

```json title="Message template editor"

{{ "payments:canceled" | t }}

``` And if you had a translation that wasn’t namespaced, say the `en` translation, you would simply use the key alone. All together in a template, it would look like this: ```json title="Message template editor"

Hello,

{{ "payments:canceled" | t }}

{{ "shipments:canceled" | t }}

{{ "canceled" | t }}

``` ### Nested translations You can create whatever JSON structure you need to hold your translations. Given the following translation: ```json title="en translation" { "customers": { "orders": { "beenReceived": "Have you received your order?", "survey": "How was your order?" }, "reminder": { "paymentInfo": "Remember to update your payment information!" } } } ``` You can access the content with dot-syntax like this: ```json title="Message template editor"

{{ "customers.orders.beenReceived" | t }}

```

The same goes for namespaced translations. If the above translation was in a translation named `services`, you would do the following:

```json title="Message template editor"

{{ "services:customers.orders.beenReceived" | t }}

``` ## Using the `t` tag Knock also provides an editor-friendly `t` tag which you can use to write templates in your default language. Translation files for any supported languages will be automatically generated in the background when you commit a workflow. Wrap content you want to translate in a t tag. Any content between the opening and closing t tags will be used as the content for your account's default locale. ```liquid title="Message template"

{% t %}Have you received your order?{% endt %}

```
After you commit your workflow, Knock will look for changes to your message templates and update a system translation file. Translation keys will be automatically generated based on the content of the `t` tag. A Knock bot will commit these changes to your account with a message indicating which workflow generated the new translations.

```json title="System translation file"
{
  "Have you received your order?": "Have you received your order?"
}
```

You can then translate the default content into additional locales by manually editing your translation files or programmatically updating them using the Knock API and a translation service.
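For example, once translated, the corresponding French locale file might look like this (the translated string is illustrative):

```json title="fr translation"
{
  "Have you received your order?": "Avez-vous reçu votre commande ?"
}
```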
## Translation version control

Translations follow the same version control flow in Knock as workflows and layouts. You create them in Development and then promote them to subsequent environments. You can archive translations that are no longer needed.

Remember: in order to see translation updates in your template previews, you'll need to commit them to your development environment first.

## Locale prioritization

When Knock renders a template for a given user and encounters our `t` helper, it runs through the following locale prioritization:

1. Language + region (e.g. `fr-BE`)
2. Language (e.g. `fr`)
3. Default locale (e.g. `en`)

Regional locales take precedence over language locales. If a translation is not found in the user’s locale, Knock will fall back to the default locale.

## Automate localization with our CLI

In addition to working with translations in the Knock dashboard, you can programmatically create and update translations using the [Knock CLI](/developer-tools/knock-cli) or our [Management API](/developer-tools/management-api).

If you manage your own translation files within your application, you can automate the creation and management of Knock translations so that they always reflect the state of the translation files you keep in your application code. The Knock CLI supports both JSON and the Portable Object (PO) file formats. When using PO files, the Knock CLI will handle converting between the Knock translation format and the PO format.

The Knock CLI can also be used to commit changes and promote them to production, which means you can automate Knock translation management as [part of your CI/CD workflow](/developer-tools/integrating-into-cicd).

### Translation directory structure

When translations are pulled from Knock, they are stored in directories named by their locale codes. Their filename will be their locale code. Any namespaced translations will prepend the namespace to the filename, with `.` used as a separator.

```txt title="Local translation files structure"
translations/
├── en/
│   ├── en.json
│   └── admin.en.json
└── en-GB/
    ├── en-GB.json
    └── tasks.en-GB.json
```

If you're migrating your local translation files into Knock, you can arrange them using the file structure above and then push them into Knock with a single command using [`knock translation push --all`](/cli#translation-push). Each `<locale>.json` or `<namespace>.<locale>.json` file should follow the structure defined [here](/mapi#translations-object).

You can learn more about automating translation management in the [Knock CLI reference](/cli). Feel free to contact us if you have questions.

## Supported locales

Below is a list of the available locales to choose from for your translations. If you need one added, contact us at support@knock.app.

## Conditions

Learn how Knock's conditions model provides dynamic control flow to your workflow runs.

---
title: Conditions
description: Learn how Knock's conditions model provides dynamic control flow to your workflow runs.
tags: ["triggers", "conditions", "conditionals", "steps", "channels", "workflows", "preferences", "conditional send", "routing"]
section: Concepts
---

Knock uses conditions to model checks that determine variations in your [workflow](/designing-workflows) runs. They provide a powerful way to create more advanced notification logic flows. You can use conditions in three areas of the Knock model:

1. [**Step conditions**](/designing-workflows/step-conditions) — Used to determine if a single step in one of your workflows should execute during each workflow run.
For example, only send an email if the preceding in-app notification has not yet been read or seen. 2. [**Channel conditions**](/integrations/overview#channel-conditions) — Used to determine if any step using the given channel should execute across all workflow runs. For example, only execute your Postmark email channel steps in your production environment. 3. [**Preference conditions**](/preferences/preference-conditions) — Used to determine the complete set of preferences available to the current workflow run. For example, allow a recipient to mute notifications for specific resources in your product. Each of these three cases share the same underlying data model and UI editor, which we outline in detail here. ## Condition types Knock's shared conditions model supports the following types of conditions: - **Data** — Evaluates against a property in the [workflow trigger](/send-notifications/triggering-workflows) data payload. - **Recipient** — Evaluates against a property on the workflow run [recipient](/concepts/recipients). - **Actor** — Evaluates against a property on the workflow run [actor](/send-notifications/triggering-workflows/api#attributing-the-action-to-a-user-or-object). - **Environment variable** — Evaluates against one of your [environment variables](/concepts/variables). - **Workflow** — Evaluates against a property of the currently executing workflow. - **Workflow run state** — Evaluates against a property of the current workflow run. - **Tenant** — Evaluates against a property on the [tenant](/concepts/tenants) associated with the current workflow run. - **Message status** — Evaluates against the [delivery status](/send-notifications/message-statuses#delivery-status) or [engagement status](/send-notifications/message-statuses#engagement-status) of a message from a previous step in the current workflow run. Message status conditions are only available when designing step-level conditions. {" "} They are not available for use with channel-level or preference-level conditions. You can learn more about how to work with message status conditions in our{" "} guide on step-level conditions . } /> ## Modeling conditions Knock models each condition as a combination of three properties: a `variable`, an `operator`, and an `argument`. This will feel familiar to boolean logic with infix operators in many modern programming languages. In our [JSON representation of a workflow](/mapi#workflows-object) this will look something like: ```json title="A workflow run condition" { "variable": "run.total_activities", "operator": "greater_than", "argument": "5" } ``` We also provide a [conditions editor](#the-conditions-editor) that provides some helpful UX abstractions on top of this model for building conditions in the Knock dashboard. ### Variables A condition variable is always a string formatted like `"."`. Knock uses the variable `prefix` to determine the condition type and the variable `path` to determine where to look up the data for evaluation. See the [conditions scope](#conditions-scope) for a list of available prefixes. ### Arguments Knock uses the condition argument as the expected value in the condition evaluation. Arguments can be either static values or dynamic properties. #### Static arguments Static arguments can be any of the following JSON literals: - Strings (`"foo"`, `"bar"`, `"baz"`) - Numbers (`1.0`, `2`, `10000`) - Booleans (`true`, `false`) - `null` Plus arrays of any of the above. #### Dynamic arguments Dynamic arguments are nearly identical to variables. 
Knock will expect a string formatted like `"."` and use the information within to resolve a value from some runtime data property. See the [conditions scope](#conditions-scope) for a list of available prefixes. ### Operators You can use any of the following operators in condition comparisons: | Operator | Description | | -------------------------- | ------------------------------------------------------------------------------------- | | `equal_to` | `==` | | `not_equal_to` | `!=` | | `greater_than` | `>` | | `greater_than_or_equal_to` | `>=` | | `less_than` | `<` | | `less_than_or_equal_to` | `<=` | | `contains` | `argument in variable` (works with strings and lists) | | `contains_all` | are all `argument` in `variable` (works with single arguments, or lists of arguments) | | `not_contains` | `argument not in variable` (works with strings and lists) | | `empty` | `variable in ["", null, []]` | | `not_empty` | `variable not in ["", null, []]` | Note: the `empty` and `not_empty` operators do not require a companion argument value in the condition, since Knock is checking for the absence of data from the variable path. ### Conditions scope Knock makes the following available to be used in a condition variable or dynamic argument: | Property | Description | | ------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `data.` | A data condition, where `` is used to select a property from the workflow trigger data payload. | | `recipient.` | A recipient condition, where `` is used to select a property on the current recipient. [See full list of properties available](/designing-workflows/template-editor/variables#recipient-user-or-object). | | `actor.` | An actor condition, where `` is used to select a property on the current actor. [See full list of properties available](/designing-workflows/template-editor/variables#recipient-user-or-object). | | `vars.` | An environment variable condition, where `` is the name of one of your environment variables. | | `workflow.{id,name,categories}` | A workflow condition. | | `run.{total_activities,total_actors}` | A workflow run condition. | | `tenant.` | A tenant condition, where `` is used to select a property on the current tenant. [See full list of properties available](/designing-workflows/template-editor/variables#tenant). | | `refs..delivery_status` | A [message status condition](/designing-workflows/step-conditions#message-status-conditions) that evaluates against a message's [delivery status](/send-notifications/message-statuses#delivery-status), where `` identifies the preceding workflow step that generated the message. | | `refs..engagement_status` | A [message status condition](/designing-workflows/step-conditions#message-status-conditions) that evaluates against a message's [engagement status](/send-notifications/message-statuses#engagement-status), where `` identifies the preceding workflow step that generated the message. | In cases where data is not found at the path given by the variable, Knock falls back to an empty string as the default value. ### Combining conditions Preference conditions note. The following syntax does not apply to preference conditions. See the{" "} preference conditions FAQs {" "} for more information on combining multiple conditions on a preference. 
### Combining conditions

**Preference conditions note:** the following syntax does not apply to preference conditions. See the [preference conditions](/preferences/preference-conditions) FAQs for more information on combining multiple conditions on a preference.

You can combine multiple conditions together via either `AND` or `OR` operators.

- `AND` combined conditions require all conditions to be true for the evaluation to pass.

```json title="JSON representation of AND combined conditions"
"conditions": {
  // the AND operator is represented by the "all" key
  "all": [
    {
      "argument": "true",
      "operator": "equal_to",
      "variable": "recipient.is_active"
    },
    {
      "argument": "true",
      "operator": "equal_to",
      "variable": "actor.is_active"
    }
  ]
}
```

- `OR` combined conditions require at least one condition to be true for the evaluation to pass.

```json title="JSON representation of OR combined conditions"
"conditions": {
  // the OR operator is represented by the "any" key
  "any": [
    {
      "argument": "true",
      "operator": "equal_to",
      "variable": "recipient.is_active"
    },
    {
      "argument": "true",
      "operator": "equal_to",
      "variable": "actor.is_active"
    }
  ]
}
```

- You may also use a combination of `AND` and `OR` operators to create more complex conditions.

```json title="JSON representation of OR plus AND combined conditions"
"conditions": {
  "any": [
    {
      "all": [
        {
          "argument": "true",
          "operator": "equal_to",
          "variable": "recipient.is_active"
        },
        {
          "argument": "true",
          "operator": "equal_to",
          "variable": "actor.is_active"
        }
      ]
    },
    {
      "all": [
        {
          "argument": "true",
          "operator": "equal_to",
          "variable": "data.force_delivery"
        }
      ]
    }
  ]
}
```

## The conditions editor

The Knock Dashboard ships with a conditions editor that provides helpful abstractions on top of this data model. Rather than needing to remember how to format variables or name operators, Knock makes the appropriate options available to you.

When creating or modifying a condition, you'll see:

- A dropdown to select the condition type. Knock will use this option to determine the variable `prefix` value.
- An input or dropdown to provide the variable data path.
- A dropdown to select the operator.
- An input or dropdown to provide the argument data path.
Working with the conditions editor to build a recipient data condition.
You can also use the conditions editor to combine multiple conditions together via either `AND` or `OR` operators.
Managing condition groups in the conditions editor.
The condition editor is available for use in the [workflow step editor](/designing-workflows#the-workflow-canvas) and the [channel environment settings editor](/integrations/overview#per-environment-configurations). ## Debugging conditions Knock executes any step, channel, and preference conditions for each step within a workflow run. As part of execution, Knock captures detailed information about each condition evaluation for use in the [workflow debugger](/send-notifications/debugging-workflows). ### Debugging step and channel conditions Knock will display step and channel conditions evaluation results together in the step detail panel in the debugger. The overall evaluation result will show whether the step was skipped. For each individual condition within the set, Knock will show either: 1. **The condition evaluation result.** This will include any dynamically resolved variable and argument data captured at workflow run time. 2. **A "not evaluated" state.** This will occur when a preceding condition or group has determined the result, meaning subsequent conditions did not require full evaluation.
Debugging step and channel conditions.
### Debugging preference conditions Knock will display any preference conditions evaluations just below the step and channel conditions results. Knock will group each condition evaluation by location within the resolved preference set. The overall evaluation result will show whether the recipient opted-out for the given workflow, category, or channel type.
Debugging preference conditions.
## Variables

Learn more about using shared Variables in Knock.

--- title: "Variables" description: "Learn more about using shared Variables in Knock." tags: ["vars", "variables", "env vars", "secrets", "constants"] section: Concepts ---

Variables within Knock let you set shared constants or secrets that you can use in all of the workflows and templates under your account. Variables can be overridden at the environment level to set per-environment constants.

## Setting variables

You can create account-wide variables under **Settings** > **Variables**. Each variable has a `key` and a `value`. The key is how you'll reference the variable in your templates, conditions, and preference conditions when building your workflows.

## Setting secret variables

By default, any variables you set are created as public. Public variables are exposed via the [user feed endpoint](/reference#get-feed) and are always visible in the dashboard to all team members.

If you're working with variables that should not be exposed, you can create them as secret variables by toggling the "Make variable secret" slider when creating a variable. Secret variables are _never_ revealed in the dashboard (all values are obfuscated) and are _never_ exposed via the API.

## Accessing variables

Variables are available to be accessed under the `vars` namespace within your templates, step conditions, and preference conditions. For instance, if you set a variable with the key `base_url` you can access that variable under `vars.base_url`.

## Overriding variables per-environment

You can optionally set environment-specific values for your variables. To do so, go to the **Settings** > **Variables** section of the dashboard, click the three dots for a specific variable to select "Edit variable," and set the value for the environment you wish to override.

## Setting JSON in variables

Your variables can optionally contain JSON, which will be parsed when the variable is resolved. For instance, if you want to set a dynamic batch window for each environment you can set a per-environment variable to contain `{ "unit": "seconds", "value": 30 }`.

Please note: variables will _always_ be parsed as JSON first, before falling back to being processed as a string.

## Audiences (Beta)

Learn how to use Audiences to power your lifecycle marketing use cases.

--- title: Audiences description: Learn how to use Audiences to power your lifecycle marketing use cases. tags: [] section: Concepts ---

Audiences is currently in beta. If you'd like early access, or this is blocking your adoption of Knock, please [get in touch](mailto:support@knock.app).

An Audience is a user segment that you can notify. You can bring audiences into Knock programmatically with our API or a supported reverse-ETL source.

Once you start creating audiences in Knock, you can use them to:

- trigger workflows for lifecycle messaging (such as new user signups) and transactional messaging (such as payment method updates)
- orchestrate branch and conditional logic within your workflows using audience membership (e.g. if a user is in a `paid users` audience, opt them out of the workflow)

## Creating an audience

Navigate to the **Audiences** section on the Knock dashboard’s sidebar, then click “Create Audience” in the top right corner.

### Using audiences across environments

When you create an audience, its key instantly exists across all environments. Any users added to an audience are scoped to a specific environment.
Audiences do not follow Knock version control and do not need to be committed or promoted to environments.{" "}Learn more about environments. Audience selector in the workflow editor ## Using audiences with workflows ### Triggering workflows Workflows can be configured to trigger for every new member added to an audience. Create or open the workflow you’d like to trigger for your audience, then open the workflow editor. Click on the “Trigger” step, then click “Edit trigger type” in the top right corner. Click “Audience“ and then select the audience you’d like this workflow to trigger from. Audience trigger type config in the workflow editor Commit your workflow to development, and when you’re ready promote it to production. At this point, every time a user is added to the selected audience a workflow will be triggered with that user as a recipient. Remember: audiences are environment scoped. This means the workflow will run in the environment where the user was added to the audience. If you use a production API key to add users to an audience in production, your workflow will trigger in the production environment.{" "} Learn more about environments. } /> ### Audience conditions Audience membership can be checked in [branch](/designing-workflows/branch-function#adding-conditions-to-branches) and [step conditions](/designing-workflows/step-conditions). Create a condition, then select “Audience membership” as the type. When the condition is evaluated during workflow execution it will check if the recipient is a member of the selected audience. Audience condition type config in the workflow editor ## Populating an audience Before populating your audience ensure that your user data has been [identified in Knock](/managing-recipients/identifying-recipients) and that you’ve configured and promoted any workflows you want to trigger with the Audience. ### Supported reverse ETL vendors Audiences can easily be synced from Hightouch Models and Census Segments by configuring Knock as a sync destination. Please reach out to support@knock.app for beta access to our rETL integrations with Hightouch and Census. ### Audiences API The Knock API can be used to sync audiences from any data warehouse or reverse ETL system. Create the audience in the Knock dashboard, then use the add and remove API operations to power your sync. The API is designed for batch processing and accepts payloads of up to 1,000 members at a time. For more information see the [audiences API docs](/reference#audiences). ## Using audiences with tenants When adding users to an audience you can optionally include a tenant ID to power per-user, per-tenant workflows. A user can exist in an audience with multiple distinct tenants. An audience member with multiple distinct tenant ids When a workflow triggers from an audience entry event, the tenant ID provided for the member will be passed along to the workflow trigger. If no tenant ID is provided in the API request, the workflow will run with no tenant data. If the same user is added with multiple distinct tenants, the workflow will trigger each time by default. To configure this behavior use [trigger frequency](/send-notifications/triggering-workflows#controlling-workflow-trigger-frequency) controls. Tenancy is also taken into account when checking audience membership. For a recipient to be considered a member of an audience during workflow execution, the tenant ID provided with the trigger data must match the user’s audience membership record. 
If no tenant ID was provided with the trigger, the user must have been added to the audience with no tenant ID. ## Frequently asked questions If you add a user to an Audience who has not yet been identified to Knock, they will be indicated as a "missing user" in the audience. If you subsequently identify a user with the missing `user_id`, they will be a member of the audience and no longer "missing." However, Knock will not retroactively trigger any audience-entry triggered workflows for users that are identified after being added to the audience. # Designing workflows Learn how to design notifications using Knock's workflow builder, then explore advanced features such as batching, delays, and more. ## Overview Learn more about how to design and create powerful cross-channel notification workflows in Knock. --- title: Designing workflows description: Learn more about how to design and create powerful cross-channel notification workflows in Knock. tags: ["steps", "workflows", "functions"] section: Designing workflows --- The Knock workflow builder enables you to craft notification workflows that combine functions, channels, and conditional logic to determine which of your users to notify across which channels when a given event takes place in your product. ## How the Knock notification engine works As you start to dig into workflows, it's helpful to understand the basics of what happens in Knock when you [trigger a workflow](/send-notifications/triggering-workflows). When Knock receives a workflow trigger (like the one below) for one of your workflows, it will produce a **workflow run** for **each recipient** you send in your workflow trigger. ```js title="A workflow trigger for three recipients" await knock.workflows.trigger("comment-created", { // The user who performed the action (optional) actor: "user_0", // The list of recipients recipients: ["user_1", "user_2", "user_3"], // Data to be passed to the template data: { page_name: "Marketing brief", comment_body: "Hey team — can we take another look at this?", }, }); ``` In the example above we've included three recipients, so our workflow trigger will produce three separate workflow runs. ## The workflow canvas All Knock workflows consist of three basic parts: - A **trigger step** that starts the workflow - **Channel steps** that send notifications to your configured channels - **Function steps** that control the flow of the workflow and produce state for use in templates ### The trigger step Every workflow starts with a trigger step. When you want to run a workflow, you send a trigger call to the Knock API with an `actor`, a list of `recipients`, and a `data` payload with any information you want to use in the notification templates of the workflow. (More on this in [triggering workflows](/send-notifications/triggering-workflows).) When the workflow is triggered, it creates a workflow run for each of the `recipients` passed in the trigger call. A trigger step can optionally have [conditions](/designing-workflows/step-conditions), which determine if the workflow should execute. When the conditions on the trigger step are not met, the workflow will terminate. ### Channel steps A channel step sends a notification to a recipient. When the workflow engine reaches a channel step, it looks for relevant channel data on the recipient. As an example, an email channel step will look for the `email` property on the recipient. If no relevant channel data for that recipient is found, the step is skipped. 
If channel data is found, then the step will send a notification. Each channel has a notification template (designed by you in the Knock dashboard) which inserts the `data` from your trigger call into a [styled template](/send-notifications/designing-workflows/template-editor) for that step's given channel. You can add any of the major [channel types supported by Knock](/integrations/overview#supported-channel-providers) into your workflow. By default, we show all of our supported channel types, but you'll need to configure a provider with each channel before you can actually use them in a workflow. For more information on how to configure channels in your Knock account, see our [integration guides](/integrations/overview). ### Function steps A function is a step in a workflow that does something to the data being passed in your trigger call. You can use functions by entering the workflow builder and adding function steps onto the canvas. We currently support the following functions: - [Batch](/send-notifications/designing-workflows/batch-function) (aggregate trigger calls that have the same value for a specified batch key) - [Branch](/send-notifications/designing-workflows/branch-function) (evaluate conditions to determine which path a workflow should take) - [Delay](/send-notifications/designing-workflows/delay-function) (wait an amount of time before proceeding to the next workflow step) - [Fetch](/send-notifications/designing-workflows/fetch-function) (execute an HTTP request to fetch additional data for a workflow) - [Throttle](/send-notifications/designing-workflows/throttle-function) (limits the number of executions of the workflow for the recipient over a window of time) - [Trigger workflow](/send-notifications/designing-workflows/trigger-workflow-function) (execute a nested workflow with trigger data derived from parent workflow data and environment variables) ## Step conditions Each workflow step can have one or more conditions that determine, at workflow execution time, if the step should execute. Conditions are one way you can add control flow logic to your notification workflows. [Read more about step conditions](/send-notifications/designing-workflows/step-conditions). ## Delay function Learn more about the delay workflow function within Knock's notification engine. --- title: Delay function description: Learn more about the delay workflow function within Knock's notification engine. tags: ["steps", "delays", "wait", "functions"] section: Designing workflows --- A delay function does just what it sounds like: it delays the execution of the workflow for some amount of time, then proceeds to the next step. There are three types of delays we support in Knock today: "wait for fixed interval", "wait for a dynamic period", and "wait until a relative timestamp." ## Wait for a fixed interval The "wait for fixed interval" delay type waits for an interval of time (provided by you in the workflow editor) and then proceeds to the next step. Fixed interval delay functions are helpful for the following use cases: - Check to see if a user's seen or read an in-app message before sending an email - Remind a user about a pending invite they haven't accepted ## Wait for a dynamic period You can also set the length of your delay dynamically using a variable. You can use any of the data, recipient, actor, or environment variables associated with the workflow run to set your duration. 
When specifying a dynamic delay period you must provide one of the following: - An [ISO-8601 timestamp](https://en.wikipedia.org/wiki/ISO_8601) (e.g. `2022-05-04T20:34:07Z`) which must be a datetime in the future - A duration unit (e.g `{ "unit": "seconds", "value": 30 }`) - A window rule (e.g `{ "frequency": "daily", "hours": 9, "minutes": 30 }`) A dynamic delay must be available to be resolved via the `key` you specify on the given schema, meaning that if you specify a key of `delayUntil` in your `data` schema, your workflow trigger data must contain either an ISO-8601 timestamp, a valid duration unit, or a valid window rule. When the key specified is missing or resolves to an invalid value, a corresponding error will be logged on the workflow run, and the delay will be **skipped**. Timestamp-based delays are helpful for reminders about resources in your product that need to be completed or addressed by a specific point in time. As an example, if a user has a task that's due three days from now and you want to remind them 24 hours before it's due, you can set a timestamp delay for the task's due date minus 24 hours. #### An example timestamp ```json title="Setting a delay until timestamp" { "delayUntil": "2024-01-05T14:00:00Z" } ``` You can then reference that in your delay step settings as `data.delayUntil`. A duration will take the current time that the delay step is executing and add the duration to it to produce the delay until time. A duration object is an entity that you can set on recipients, tenants, environment variables, or in your data payload and reference on your delay step. #### The duration schema ```typescript title="A relative duration" type Duration = { unit: "seconds" | "minutes" | "hours" | "days" | "weeks"; value: number; }; ``` #### An example duration Let's say you want to express a duration that delays for 15 minutes, here's how you structure that: ```json title="Setting a duration" { "delayDuration": { "unit": "minutes", "value": 15 } } ``` You then reference that as `data.delayDuration` in the delay step configuration. A window rule determines a dynamic interval for when the delay should close. It allows you to express rules like "delay until Monday at 9am". The window rule will always be evaluated in the [recipient's timezone](/concepts/recipients#recipient-timezones) (when set) and will fall back to the account default timezone, or "Etc/UTC". #### The window rule schema ```typescript title="A window rule" type WindowRule = { frequency: "hourly" | "daily" | "weekly" | "monthly", // The specific days the rule is valid on days?: Array<"mon" | "tue" | "wed" | "thu" | "fri" | "sat" | "sun"> | "weekdays" | "weekends", // The hour which the rule should evaluate (defaults to 0) hours?: number, // The minute at which the rule should evaluate (default to 0) minutes?: number, // What day of the month should this rule execute (useful when monthly) day_of_month?: number, // How often should this rule repeat? Defaults to 1 interval?: number }; ``` #### Example window rule Let's say you want to express setting a window rule for delaying until Monday at 9am, here's how you might structure that on your recipient: ```json title="Recipient delay window" { "delayUntil": { "frequency": "weekly", "days": ["mon"], "hours": 9 } } ``` Now you can set the delay window key to `recipient.delayUntil` to reference this window rule. ## Wait until a relative timestamp You can use our relative delay to wait some time before or after a timestamp that you provide in your workflow payload. 
This computes a delay time for a fixed interval relative to a dynamic timestamp. Relative delay functions are helpful for various scenarios, including: - Appointment reminders: send a notification one day before an appointment time - Follow-up reminders: send a follow-up message two hours after an event When configuring a relative delay, you'll specify: - A fixed delay interval (provided by you in the workflow editor) - Whether the delay should occur before or after the dynamic timestamp - The `key` for the dynamic timestamp (which can come from your trigger data, recipient data, or other sources) As in the dynamic delay section above, the key specified must be available to be resolved. If the key is missing or resolves to an invalid value, a corresponding error will be logged on the workflow run, and the delay will be skipped. ## Using workflow cancellation with delays In cases where you're waiting to see if a user will complete an action before sending a notification, you can use our [workflow cancellation API](/send-notifications/canceling-workflows) to ensure a user doesn't receive an unnecessary reminder. If the user completes the action you were going to remind them about, cancel the workflow to keep any additional notifications from being sent. ## Frequently asked questions Often when you're testing your Knock workflows, you'll want your delay durations to be shorter in non-production environments to aid with testing. To set per-environment delay duration you can: - Create a new variable under **Settings** > **Variables** with a relative duration as JSON (`{ "unit": "seconds", "value": 30 }`) and a name of `delayDuration`. You can set per-environment values to specify a shorter or longer window as needed - Set your delay duration to "Wait until a dynamic interval" - Specify that your delay duration will come from an environment variable - Set the key to `delayDuration`, which will resolve the delay duration from the variable you created You can use the [workflow cancellation API](/send-notifications/canceling-workflows) to cancel a delayed workflow. You must use a unique cancellation key to cancel a previously triggered workflow run. A workflow can be delayed for a maximum of 365 days (1 year). Knock will ensure that your delayed workflow run will execute within ~1 - 5s of the delayed time. We currently don't have a way to view all delayed workflow runs with pending messages. If this is a feature you need, please reach out as we'd love to hear your use case. Workflow recipient runs will always reference the workflow version that was current when the run was triggered, so your changes will not be reflected in workflow runs that are already in flight. If you need to stop a delayed workflow run because you've updated your workflow, you can use the [workflow cancellation API](/send-notifications/canceling-workflows). ## Batch function Learn more about the batch workflow function within Knock's notification engine. --- title: Batch function description: Learn more about the batch workflow function within Knock's notification engine. tags: ["steps", "batch", "batched messages", "batching", "digests", "functions"] section: Designing workflows --- A batch function collects notifications that have to do with the same subject, so you can send fewer notifications to your users. Batch functions are helpful when a recipient needs to be notified about a lot of activity happening at once, but doesn't need a notification for every single activity within the batch. Commenting is a common use case. 
If a user leaves ten comments on a page in fifteen minutes, you don't want to send the user ten separate notifications. You want to send them one notification about the ten comments they just received.

## How batching works

Here's a step-by-step breakdown of how a batch function works:

- When a given per-recipient workflow run hits a batch step, the batch function will stay open for an interval of time which you define (the [batch window](#setting-the-batch-window)).
- While that interval is open, the batch function aggregates any additional incoming triggers **for that recipient**. If a [batch key](#selecting-a-batch-key) is provided in your batch step, the incoming triggers **for that recipient** will be grouped into **separate batches based on batch key.**
- When the batch window interval closes, the workflow continues to the next step, with the data collected in the batch available in the workflow run scope. You can read more in the ["using batch variables" section](#using-batch-variables) of this guide.

Note: by default the batch will only return the first (or last) 10 items to be rendered in your template. This limit can be configured on our Enterprise Plan (up to 100 items). The batch can, however, accumulate any number of items while the window is open.

## Selecting a batch key

A batch function always batches incoming notifications **per recipient**. If you do not provide a batch key, your batch function will just batch per recipient. If you do provide a batch key, your batch function will batch by recipient and then by your batch key. A batch key resolves to a value in your `data` payload by which to group incoming notifications.

A quick tip: here's a helpful way to think about batching. By default the batch function batches on a key of `recipient_id`. When a batch key is provided, it batches on a key of `concat(recipient_id, batch_key)`.

As an example, in a document editing app where a recipient is receiving notifications about activity across different pages, you can provide a batch key of `page_id` and the user will receive different batch notifications about each page that was included in the batch.
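To make this concrete, here's roughly what the trigger `data` for two of those comment notifications might look like. The property names are illustrative; the only requirement is that the property you choose as the batch key is present in your trigger data:

```json title="Example trigger data for a page_id batch key"
// First trigger: accumulates into the recipient's batch for page A
{
  "page_id": "page_A",
  "comment_body": "Looks great so far!"
}

// Second trigger: accumulates into a separate batch for page B
{
  "page_id": "page_B",
  "comment_body": "Can we revisit this section?"
}
```

With `page_id` configured as the batch key, these two triggers for the same recipient open two separate batches, one per page.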
Using the batch function to batch new comment notifications by page.
Here's a detailed walkthrough of how this example might work in practice: - You have a `new-comment` workflow that includes a batch step. - You send six trigger calls to that workflow: three about `page A` and three about `page B`. The trigger calls are all for the same recipient Elmo. - If your batch step does not have a batch key, Elmo will receive a batched notification about six activities. - If your batch step includes a batch key of `page_id`, Elmo will receive two notifications: one for the three activities about `page A` and one for the three activities about `page B`. ## Setting the batch window The batch window determines the length of time that the batch will be open, with the window opening from the **first** time the batch is triggered. ### Set a fixed batch window You can set a fixed duration batch window using the "Batch for a fixed window" option in the batch step. The window accepts a relative duration, which can be specified in seconds, minutes, hours, or days. The batch is opened when it is first triggered for a given recipient. The batch is closed after the fixed duration of time has elapsed. ### Set a dynamic batch window using a variable You can also set the length of your batch windows dynamically using a variable. You can use any of the data, recipient, actor, or environment variables associated with the workflow run to retrieve your dynamic batch window. When specifying a dynamic batch window you must provide one of the following: - An [ISO-8601 timestamp](https://en.wikipedia.org/wiki/ISO_8601) (e.g. `2022-05-04T20:34:07Z`) which must be a datetime in the future - A relative duration (e.g `{ "unit": "seconds", "value": 30 }`) - A window rule (e.g `{ "frequency": "daily", "hours": 9, "minutes": 30 }`) A dynamic window must be available to be resolved via the `key` you specify on the given schema, meaning that if you specify a key of `batchWindow` in your `data` schema, your workflow trigger data must contain either an ISO-8601 timestamp, a valid duration unit, or a valid window rule. When the key specified is missing or resolves to an invalid value, a corresponding error will be logged on the workflow run, and the batch will be **skipped**. A fixed timestamp will tell Knock to close the batch window at the exact date time you provide. It must be a valid ISO-8601 timestamp in the future. #### An example timestamp ```json title="Setting a batch until timestamp" { "batchUntil": "2024-01-05T14:00:00Z" } ``` You can then reference that in your batch step settings as `data.batchUntil`. A duration will take the current time that the batch step is executing and add the duration to it to produce the batch window closing time. A duration object is an entity that you can set on recipients, tenants, environment variables, or in your data payload and reference on your batch window. #### The duration schema ```typescript title="A relative duration" type Duration = { unit: "seconds" | "minutes" | "hours" | "days" | "weeks"; value: number; }; ``` #### An example duration Let's say you want to express a duration that will always close a batch window 1 day after the batch is started, here's how you structure that: ```json title="Setting a duration" { "batchDuration": { "unit": "days", "value": 1 } } ``` You then reference that as `data.batchDuration` in the batch step configuration. A window rule determines when the next occurrence of the batch window should be executed. 
It allows you to express rules like "batch until Monday at 9am", or "keep the batch window open for 2 weeks until the next Friday." The window rule will always be evaluated in the [recipient's timezone](/concepts/recipients#recipient-timezones) (when set) and will fall back to the account default timezone, or "Etc/UTC". #### The window rule schema ```typescript title="A window rule" type WindowRule = { frequency: "hourly" | "daily" | "weekly" | "monthly", // The specific days the rule is valid on days?: Array<"mon" | "tue" | "wed" | "thu" | "fri" | "sat" | "sun"> | "weekdays" | "weekends", // The hour which the rule should evaluate (defaults to 0) hours?: number, // The minute at which the rule should evaluate (default to 0) minutes?: number, // What day of the month should this rule execute (useful when monthly) day_of_month?: number, // How often should this rule repeat? Defaults to 1 interval?: number }; ``` #### Example window rule Let's say you want to express setting a window rule for batching weekly on a Monday at 9am, here's how you might structure that on your recipient: ```json title="Recipient batch window" { "batchWindow": { "frequency": "weekly", "days": ["mon"], "hours": 9 } } ``` And now you can set the batch window key to `recipient.batchWindow` to reference this window rule. **Please note**: an open batch window will never be extended by a subsequent workflow trigger with a different dynamic batch window specified. Once a given batch has been opened by a workflow trigger, its window interval is immutable. When the key specified is missing, or resolves to an invalid value, a corresponding error will be logged on the workflow run and the batch will be **skipped**. ### Using a sliding batch window By default, all batch windows are fixed, where the closing of the batch window is determined by the first trigger that starts the batch. In some situations, you may wish to "extend" the batch window when a new trigger is received to recompute the closing time of the batch. This option is supported in the batch step as a "sliding window." When a sliding window is enabled on a batch function, subsequent workflow triggers that are detected by the already-open batch window will add the configured default window duration onto the already-open batch window. Let's walk through an example: - 🎛️ [Initial batch window: 1 minute] - Trigger: the batch opens with a closing window of `now() + 1 min` - ⏲️ [30 seconds pass] - Trigger: new item added to the batch, the closing window is recomputed to be `now() + 1 min`, a total of 1 minute and 30 seconds from when the batch was opened - ⏲️ [1 minute passes] - The batch closes after 1 minute and 30 seconds #### Setting a maximum batch window duration When using a sliding batch window, you must set an extension limit for the batch. This value represents the maximum amount of time that a batch window can remain open if it is extended by subsequent workflow triggers. This "Max window limit" option is displayed once you enable a sliding window by selecting "Extend window when new activities are received," and can be set as any duration unit. Once configured, Knock will compute the maximum extended batch window for subsequent triggers as the time your batch was initially opened plus the maximum window duration. 
For example: - 🎛️ [Initial batch window: 12 hours] - 🎛️ [Max extension limit: 24 hours] - Trigger: the batch opens with a closing window of `now() + 12 hrs` - ⏲️ [6 hours pass] - Trigger: new item added to the batch, the closing window is recomputed to be `now() + 12 hrs`, a total of 18 hours from when the batch was opened - ⏲️ [Another 7 hours pass] - Trigger: new item added to the batch, the closing window is recomputed to be `now() + 12 hrs`, which would be a total of 25 hours. Because this exceeds the maximum extension limit, the window is set to close 24 hours after it was opened - ⏲️ [Another 3 hours pass] - Trigger: new item added to the batch. The closing window is not recomputed because the maximum extension has already been reached - ⏲️ [Another 8 hours pass] - The batch closes after 24 hours If you configure your maximum window with a value that is _less_ than the initial window duration, subsequent batched triggers will shorten the overall window. If this new maximum duration has already elapsed, the batch window will immediately close and the workflow run will proceed. - 🎛️ [Initial batch window: 24 hours] - 🎛️ [Max extension limit: 12 hours] - Trigger: the batch opens with a closing window of `now() + 24 hrs` - ⏲️ [23 hours pass] - Trigger: new item added to the batch, the closing window is recomputed to be `now() + 24 hrs`, a total of 47 hours from when the batch was opened. This exceeds the configured maximum of 12 hours, so the window is set to close 12 hours after it was opened - Because 12 hours have already elapsed, the batch window closes immediately (after 23 hours have elapsed) To avoid confusion, we recommend always choosing a max extension limit duration that is greater than your initial batch window duration. ## Setting the maximum activity limit Optionally, you can also set a maximum limit for the number of activities allowed to be accumulated in a given batch, at anywhere between 2 and 1000 activities. When this option is set, your batch window will close as soon as the number of activities accumulated in the batch reaches the maximum limit set, regardless of the amount of time remaining in its fixed or sliding batch window. ## Setting the batch order Although batches will accumulate every activity added to the batch, only ten items will be returned in `activities` once the batch step window closes. There are two options for which ten activity objects will be returned when the batch step closes: - **The first ten (default):** The ten oldest activity objects added to the batch step will be returned. - **The last ten:** The ten newest activity objects added to the batch will be returned. Note that for both settings, the `activities` variable will always be sorted in chronological order (oldest to most recent). ## Immediately flushing the first item in a batch Batch steps optionally support a mode to immediately flush the first item in a batch. This mode is useful when you want to immediately notify a user about the first item in a batch, and then accumulate additional items over a window of time. To enable this mode, you can toggle on "Immediately flush leading item" in the "Advanced settings" section of the batch step. When this mode is enabled, the first item for an unopened batch will "open" the batch and the usual batching rules will apply. However, unlike a normal batch, the first item will **not be included in the `activities` of the batch** and will instead continue execution past the batch step. 
If you want to branch on whether the first item in a batch was flushed or not, you can use the `total_activities` variable to do so. When it is set to 1, you know that you're working with the first item in a batch. Please note: if there is never a second item added to the batch, the batch will noop on closing as there is nothing in it to execute. } /> ## Working with batches in your templates Another important aspect of batch functions is that they generate state that can be used in your templates. Let's continue the commenting example we used above. In this scenario, we'll want different copy in our notification for when a batch includes one item ("Jane left a comment") v. when a batch includes more than one item ("Jane left _n_ comments"). We can address use cases like this by referencing the `total_activities` variable within our workflow. Here's an example of a message template that uses this variable to determine what type of copy to use: ```markdown {% if total_activities > 1 %} {{ actor.name}} left {{ total_activities }} comments on {{ page_name }} {% else %} {{ actor.name}} left a comment on {{ page_name }}. {% endif %} ``` Here's a list of the variables that you can use to work with batch-related state. - `total_activities`. The number of activities included within the batch. (An example: In the notification "Dennis Nedry left 8 comments for you", the `total_activities` count equals eight). - `total_actors`. The number of unique actors that triggered activities included within the batch. (An example: In the notification "Dennis Nedry and two others left comments for you", the `total_actors` count equals three, Dennis plus the two others you mentioned in the notification). - `activities`. A list of up to ten of the activity objects included within the batch, where each activity equals the state sent across in your trigger call. The `activities` variable lists the _first_ or _last_ ten activity objects added to the batch (configurable by setting the [batch order](#setting-the-batch-order)). Each activity includes any data properties you sent along in the trigger call, as well as any user properties for your actor and recipient(s). You can use the activities variable to create templates like this: ``` {% for activity in activities %}

{{ activity.actor.name }} commented on {{ activity.pageName }} with:

{{ activity.content }}
{% endfor %} ``` - `actors`. A list of up to ten of the unique actors included within the batch, where each actor is a user object with the properties available on your Knock user schema. The `actors` variable lists the _first_ or _last_ ten actors added to the batch. ### Setting the batch render limit (beyond 10) Enterprise plan feature. The render limit setting for batch activities and actors is only available on our{" "} Enterprise plan. } /> By default, up to ten items will be returned in `activities` and `actors` variables inside your templates after the batch window closes. On the Enterprise plan, you can configure the maximum number of `activities` and `actors` to be rendered in your templates beyond the default limit of 10, to any number between 2 and 100. ## Using workflow cancellation with batches If you want to remove an item from a batch (example: a user deletes a comment), you can use our [workflow cancellation API](/send-notifications/canceling-workflows) to cancel a batched item, thereby removing it from the batch. Important: Once a batch window has been opened, it will remain open until its full duration has elapsed. Any workflow cancellation will remove the specific individual workflow run that it references from the batch.

Because of this behavior, it's important to remember that canceling a workflow run that opened a batch window will never close the batch window itself. Any subsequent triggers to that recipient/workflow key combination will add activities to the open batch, and those activities will proceed when the batch window closes if their respective workflow runs are not also canceled. See the FAQs below for a workaround to close an open batch window.

---

## Frequently asked questions

Often when you're testing your Knock workflows, you'll want your batch windows to be shorter in non-production environments to aid with testing. To set per-environment batch windows you can:

- Create a new variable under **Settings** > **Variables** with a relative duration as JSON (`{ "unit": "seconds", "value": 30 }`) and a name of `batchWindow`. You can set per-environment values to specify a shorter or longer window as needed
- Set your batch window to "Batch for a dynamic interval"
- Specify that your batch window will come from an environment variable
- Set the key to be `batchWindow`, which will resolve the batch window from the variable you created

Right now we don't offer a way to close a batch from a workflow trigger. One workaround is to use a [sliding batch window](/designing-workflows/batch-function#using-a-sliding-batch-window) and then set the max extension window to be a very small duration (e.g., 1 second), meaning that the batch will immediately close when a subsequent trigger occurs.

You can use the [workflow cancellation API](/send-notifications/canceling-workflows) to remove an item that has been accumulated into an active batch. If all items have been removed from the batch when its window closes, any subsequent channel steps will be skipped.

A batch can support an unbounded number of items per recipient, although we will only ever return either the first 10 or last 10 items to be rendered in your template. On the Enterprise plan, you can configure this to include up to 100 via the [render limit setting](/designing-workflows/batch-function#setting-the-batch-render-limit-beyond-10).

We will by default expose at most 10 activities to be rendered in your template from your batch (available under the `activities` variable). The `total_activities` variable will always include the total number of activities bundled in the batch. On the Enterprise plan, you can configure this to include up to 100 via the [render limit setting](/designing-workflows/batch-function#setting-the-batch-render-limit-beyond-10).

You can use the "Batch order" setting on the batch step to choose whether you want the first 10 items (the default) or the last 10 items added to the batch. You can use the `activities` property in your template to access the items included in the batch. Each `activity` will include any `data` sent along with the workflow trigger that was batched.

You can think about batching as a per-recipient, per-workflow summary of notifications that should be sent together. Many of our Knock customers use batching as a form of digest to reduce the number of notifications that their users receive. If you have more advanced digesting needs that aren't covered by our current batching implementation, [please get in touch](mailto:support@knock.app).

We're currently working on this feature! If you'd like early access, please [get in touch with us](mailto:support@knock.app?subject=Per%20recipient%20batch%20windows).

When messages are generated from a batch step, the workflow trigger call data for the first (or last) 10 activities of the batch will be combined into a single entity at batch closing time. You will be able to filter messages or feed items using the `trigger_data` parameter of our API, which will filter the results to only the items whose workflow trigger call's data contains the given `trigger_data` value.

This means that using the `trigger_data` parameter will only return items for which the combined workflow trigger call data of the first (or last) 10 activities contains the value used on the `trigger_data` parameter. If you are using a value for the `trigger_data` parameter which is not included in the first (or last) 10 activities of an item, then the item will not be returned.

To understand what the combined trigger call data will look like, let's take a look at the following example. Consider the case where a message was generated after a batch step with 2 batched activities closes. The first activity was generated by a workflow trigger call with the following trigger data: `{page: "A"}`. The second activity was generated by a workflow trigger with the following trigger data: `{page: "B"}`. When the batch closes, the trigger data of both activities will be merged into a single object that will contain `{page: "B"}` (the value from the later activity wins the merge). If we try to filter messages or feed items using the `trigger_data` filter with the value `{page: "A"}`, the message in the example won't be returned.
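Here is that example sketched out as JSON:

```json title="How batched trigger data combines for the trigger_data filter"
// Trigger data for the first batched activity
{ "page": "A" }

// Trigger data for the second batched activity
{ "page": "B" }

// Combined trigger data on the resulting message: the later value wins
{ "page": "B" }
```

A `trigger_data` filter of `{ "page": "A" }` would therefore not match this message, while `{ "page": "B" }` would.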
Yes, if you use the [sliding batch window](#using-a-sliding-batch-window) option then the batch window can always be extended past its original setting. When combined with a dynamic batch window from a variable, this allows you to control exactly when a specific batch window should close.

Yes, you can optionally set the [maximum activity limit](#setting-the-maximum-activity-limit) to conditionally close the batch window based on the number of items contained in the batch.

We cannot guarantee that requests made in quick succession (< 2s apart) will appear in the batch in the order they were made. If you need a guaranteed order, you will need to enqueue requests with some latency between them in your system.

## Branch function

Learn more about the branch workflow function within Knock's notification engine.

--- title: Branch function description: Learn more about the branch workflow function within Knock's notification engine. tags: [ "steps", "branch", "switch", "conditions", "conditional", "if else", "branching", ] section: Designing workflows ---

The branch function allows you to execute discrete branches of logic within your workflows using our powerful [conditions builder](/concepts/conditions) to specify the criteria for when a branch should execute.

You can think about the branch function in Knock as an `if/else` step, with the ability to add multiple `else if` clauses. Each branch has access to the full [workflow run scope](/concepts/conditions#condition-types) to evaluate conditions. Knock will execute the first branch whose conditions evaluate to `true`.

Branching by a recipient plan type

## Adding conditions to branches

Each non-default branch must have at least one condition for the branch function to be valid. Conditions are added through the conditions builder, which allows you to compose conditions via `and` or `or` boolean operators.
You can build conditions for branches that contain any of the types called out in the [conditions documentation](/concepts/conditions#condition-types), including access to any messages previously generated within the workflow run. ## The default branch For each branch step, a default branch must always exist, although the default branch does not need to contain any steps. When none of the preceding branches evaluate to `true`, the default branch is executed. ## Terminating branches Each branch in a branch function can optionally terminate the workflow. This can be useful to ensure that for certain cases you don't want the workflow to continue executing. You can toggle the ability to terminate the branch by checking the "Exit the workflow at the end of the branch" under the conditions section. ## Managing branches Branches within your branch function can be: - Renamed for clarity to give a visual indicator of when the branch executes - Re-ordered to change the execution order - Deleted, removing all steps inside of the branch Note: the default branch cannot be deleted or re-ordered. } /> ## Debugging branches You can debug branch execution in the [workflow debugger](/send-notifications/debugging-workflows). During a workflow run for a workflow with branches, we'll highlight the specific branch paths that were executed to help you debug. We'll also highlight the conditions that led to why a particular branch was executed. Debugging a workflow run with a branch step ## Frequently asked questions Yes, absolutely. You can nest delays, throttles, batches, and other branch steps inside of branches as well. The maximum depth for branches is set at 5. If you have needs that go beyond this, please reach out to discuss. The maximum number is currently 10 branches, including the default, per-branch function. No, you cannot have step conditions on the branch step. ## Fetch function Learn more about the fetch workflow function within Knock's notification engine. --- title: Fetch function description: Learn more about the fetch workflow function within Knock's notification engine. tags: ["steps", "fetch", "request", "http", "functions"] section: Designing workflows --- A fetch function executes an HTTP request as a step in a workflow. Any data returned to a fetch function is merged into the original trigger `data` provided on workflow trigger and made available to all subsequent steps in the workflow. With the fetch function, you can acquire additional data for your channel step templates that may not be immediately available when you first trigger a workflow. A common case is combining a fetch function with a [batch function](/send-notifications/designing-workflows/batch-function) to retrieve trigger data for a group of activities after a batch window has closed. You can also use the fetch function to trigger side effects in your systems as Knock processes your workflow. ## Building a request As with channel steps, you use the Knock template editor to configure the shape of your request. For each fetch step, you can edit the following attributes: - **Request method** - You can select one of GET (default), POST, PUT, DELETE, or PATCH. - **URL** - A valid HTTP URL. - **Headers** - Any headers Knock should include in the request. You manage these via a key-value editor, with the key being the header name and the value being the header value. - **Query parameters** - Any query parameters to encode into the URL. You also manage these via a key-value editor. 
- **Request body** - When building a POST or PUT request, you can build a request body to include in the request. Knock will always encode the request body as JSON.
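As a rough sketch (not a literal Knock API payload), the values you might enter into those fields for a request that looks up extra recipient data could resemble the following. The URL, header, token variable, and data property here are all hypothetical:

```json title="A sketch of fetch step request settings"
{
  "method": "GET",
  "url": "https://api.example.com/users/{{ recipient.id }}/activity",
  "headers": {
    "Authorization": "Bearer {{ vars.example_api_token }}"
  },
  "query_params": {
    "since": "{{ data.last_seen_at }}"
  }
}
```

Each of these fields accepts Liquid, as described below, so values like `recipient.id` are resolved per workflow run.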
Using the request template editor to configure a fetch function.
Aside from the request method selector, each of the above fields is a Liquid-compatible input. This means you can use Liquid variables and control flow to inject variable data, access Knock-controlled workflow state attributes (e.g., `recipient`), and dynamically shape the request per workflow run. See the [Knock template editor reference](/send-notifications/designing-workflows/template-editor) for a detailed guide on working with Liquid templates in Knock. ## Request execution When executing the request for a fetch function, Knock expects the following from your service: - The response to the request is one of: `200 OK`, `201 Created`, or `204 No Content`. - If the request response contains data, it's encoded as JSON and can be decoded into a map/dictionary/hash. - The response to the request takes no longer than 15 seconds for Knock to receive. ### Merging data When the response sent to Knock for a fetch function request contains JSON data, Knock will merge the decoded result into the `data` you originally passed to [the workflow trigger call](/send-notifications/triggering-workflows). Knock uses a shallow-merge strategy here where: - Data from the request overwrites the original workflow run data. - Top-level attributes are merged, and nested attributes are completely overwritten. _The merged data result from a fetch function step then becomes the global trigger data for all subsequent steps in the workflow run._ The example below illustrates how this could look in practice. ```json title="Example response data merge for fetch function steps" // Original trigger data { "foo": "bar", "metadata": { "count": 1 } } // Fetch function response data { "biz": "baz", "metadata": { "query_count": 1 } } // Merge result { "foo": "bar", "biz": "baz", "metadata": { "query_count": 1 } } ``` ### Specifying the Response Path You can specify where in the trigger data the response from the fetch step should be placed. To do so, click on "Manage Settings" from the fetch step within your workflow template editor. From there, you can specify the response path. The response path can be any string. To create nested keys within the trigger data, use dot (`.`) notation. For example, specifying `foo.bar` will place the response under the `bar` key within the `foo` object. ### Error handling Knock will automatically retry request execution for a fetch function following certain types of errors. The first retry will be delayed by 30 seconds, and the second by 60 seconds. These retryable errors are: - **Server errors** - Any `5xx` level HTTP error code. - **Request timeouts** - This is any fetch function request from Knock that does not receive a completed response within the 15 second limit. All other errors or unexpected responses are immediately fatal. These include: - Any other HTTP response code. - Some issue with the structure of the request, such as an invalid URL. - Any issue JSON-encoding a request body. - Response data that cannot be JSON-decoded as expected. After two failed retries for a retryable error or any non-retryable error, Knock will mark the fetch function step as a failure and halt your workflow run. ## Testing fetch functions As you develop, you can execute test runs of your fetch step from right within the template editor. This should look and feel similar to executing test runs of your workflows, but here Knock will execute just your fetch step, ignoring any other steps that may exist before or after. To run a fetch step test: 1. 
Click the button that sits to the right of the URL field in the template editor. This should open the Knock test runner modal.
2. Specify the appropriate trigger parameters (actor, recipient, trigger data, and tenant) for the test run. **NOTE:** If your fetch step expects data from a preceding batch step or fetch step, you'll need to explicitly include it here in the "Data" field. Since Knock will test this step in isolation, it cannot know what preceding data may be present when the full workflow runs.
3. Click the "Run test" button in the modal. The modal will close and the test console should display a loading state as Knock executes the test.
4. When the test run has completed, Knock will load the result into the test console for your review. You can then use the "Request" and "Response" buttons to toggle between the two views in the test console. The "Response" section will show any data returned by the request that would be made accessible to subsequent steps in your workflow.

**When running fetch step tests, Knock will not retry a failed request on any error.** For the retryable errors [outlined above](#error-handling), Knock will indicate in the test console result that they would be retried during a full workflow run.

## Debugging fetch functions

You can use the [workflow run logs](/send-notifications/debugging-workflows) to debug your fetch function steps. For each fetch function, you can expect to see in the logs:

- The request URL (with encoded query parameters), headers, and body as sent by Knock.
- The duration of the request (in milliseconds).
- The response headers and body data.

In the workflow run overview, you'll also see any data that Knock successfully received from your fetch function steps and merged into your workflow run state.
Viewing log details for a successful fetch function step.
If the request encounters an error, you can also expect to see details about the error in the logs. And finally, if the fetch function retries the request on a retryable error, you can expect to see details enumerated for each request attempt.
Viewing log details for an unsuccessful fetch function step.
See the [guide on debugging workflows](/send-notifications/debugging-workflows) for more details about workflow debugging and run logs. ## Securing fetch requests Adding security to your fetch requests guards your endpoint from the outside world. There are currently two ways to do this within Knock: using authentication headers, or adding request signing. ### Adding authentication via headers One option for adding authentication is to use a **shared secret** between Knock and your service's endpoint that you inject into the headers of the request. You can use our [secret variables](/concepts/variables#setting-secret-variables) to create and store this secret within Knock, ensuring that it can be unique per environment and also obfuscated within the dashboard across all usage. Variables can be accessed under the `vars` namespace in Liquid. To add a secret to a header, use the syntax `{{ vars.your_variable_name }}` in the header value field. ### Adding request signing Another option is to enable **request signing**, which signs each request with a signing key that Knock generates, allowing you to guarantee that the request is coming from Knock. You can enable request signing for the fetch function by going to the "Manage settings" modal in the top right corner when editing the request template. Once you enable request signing, Knock will generate a signing key that will be used to sign the request. This same key can then be used within your application to verify the request came from Knock via a signature added to the request as an `x-knock-signature` header. **Verifying the signature** The signature is generated with an HMAC using the SHA256 algorithm and, before being encoded, consists of the timestamp and the stringified JSON payload of the request. We encode `"timestamp in numerical form"."stringified payload"` as the signature of the request. The `x-knock-signature` header is a string comprising the timestamp used in the encoding and the encoded value above. It will look like this: `t=timestamp,s=encoded-signature` To verify that the payload sent has not been compromised, you can recreate the signature using the signing key and compare it to the one sent in the header: 1. Split the `x-knock-signature` on the comma (",") and extract the timestamp and signature values. 2. Construct the value to sign by concatenating: - The timestamp (as a string) - The character `.` - The stringified JSON payload 3. Generate the signature with an HMAC and the SHA256 algorithm using the signing key from the fetch function. 4. Compare your generated signature with the one extracted in step one; they should match exactly. If the timestamp is more than five minutes older than the current time, you may want to reject the payload for additional security. ## Throttle function Learn more about the throttle workflow function within Knock's notification engine. --- title: Throttle function description: Learn more about the throttle workflow function within Knock's notification engine. tags: ["steps", "functions"] section: Designing workflows --- A throttle function allows you to limit the number of times a workflow is executed for a recipient within a given window. For example, in an alerting system, your recipients might only want to receive a single email _per hour_ for a given alert. A throttle lets you express this logic within Knock.
Throttle functions are helpful when you want to control how often a workflow is executed for a recipient without needing to implement the logic within your own application layer. ## How throttling works Throttling works like a gate. When the throttle step is executed, the gate is checked; if the threshold over the window has been exceeded, then the workflow stops execution. If the threshold has not been met, then the workflow will proceed. Throttle functions have three pieces of configuration: 1. **A throttle window**: the length of the throttle period. 2. **A throttle threshold**: the number of invocations allowed within the window. Defaults to 1 if none is provided. 3. **A throttle key** (optional): a value to use as the throttle key for the workflow run. ## Setting a throttle window The throttle window determines how long a throttle is active for the recipient. The window opens the first time the throttle function is executed in a workflow run for a recipient. ### Set a fixed throttle window You can set a fixed-duration throttle window using the "Throttle for a fixed window" option in the throttle step. The window accepts a relative duration, which can be specified in seconds, minutes, hours, or days. ### Set a dynamic throttle window You can also set the length of your throttle windows dynamically using a variable. You can use any of the data, recipient, actor, or environment variables associated with the workflow run to set your dynamic throttle window. When specifying a dynamic window, you must provide one of the following: - An **[ISO-8601 timestamp](https://en.wikipedia.org/wiki/ISO_8601)** (e.g. `2022-05-04T20:34:07Z`), which must be a datetime in the future - A relative duration unit (e.g. `{ "unit": "seconds", "value": 30 }`) - A window rule (e.g. `{ "frequency": "daily", "hours": 9, "minutes": 30 }`) A dynamic interval must be resolvable via the `key` you specify on the given schema, meaning that if you specify a key of `throttleWindow` in your `data` schema, your workflow trigger data must contain either an ISO-8601 timestamp, a valid duration unit, or a valid window rule under that key. When the key specified is missing or resolves to an invalid value, a corresponding error will be logged on the workflow run, and the throttle will be **skipped**. A fixed timestamp will tell Knock to close the throttle window at the exact datetime you provide. It must be a valid ISO-8601 timestamp in the future. #### An example timestamp ```json title="Setting a throttle until timestamp" { "throttleUntil": "2024-01-05T14:00:00Z" } ``` You can then reference that in your throttle step settings as `data.throttleUntil`. A duration will take the current time at which the step is executing and add the duration to it to produce the throttle window close time. A duration object is an entity that you can set on recipients, tenants, environment variables, or in your data payload and reference on your throttle step. #### The duration schema ```typescript title="A relative duration" type Duration = { unit: "seconds" | "minutes" | "hours" | "days" | "weeks"; value: number; }; ``` #### An example duration Let's say you want to express a duration that throttles for 15 minutes. Here's how you'd structure that: ```json title="Setting a duration" { "throttleDuration": { "unit": "minutes", "value": 15 } } ``` You then reference that as `data.throttleDuration` in the throttle step configuration. A window rule determines a dynamic interval for when the throttle should close.
It allows you to express rules like "throttle until Monday at 9am." The window rule will always be evaluated in the [recipient's timezone](/concepts/recipients#recipient-timezones) (when set) and will fall back to the account default timezone, or "Etc/UTC". #### The window rule schema ```typescript title="A window rule" type WindowRule = { frequency: "hourly" | "daily" | "weekly" | "monthly", // The specific days the rule is valid on days?: Array<"mon" | "tue" | "wed" | "thu" | "fri" | "sat" | "sun"> | "weekdays" | "weekends", // The hour at which the rule should evaluate (defaults to 0) hours?: number, // The minute at which the rule should evaluate (defaults to 0) minutes?: number, // The day of the month on which this rule should execute (useful when monthly) day_of_month?: number, // How often this rule should repeat (defaults to 1) interval?: number }; ``` #### Example window rule Let's say you want to set a window rule that throttles until Monday at 9am. Here's how you might structure that on your recipient: ```json title="Recipient throttle window" { "throttleWindow": { "frequency": "weekly", "days": ["mon"], "hours": 9 } } ``` Now you can set the throttle window key to `recipient.throttleWindow` to reference this window rule. ## Setting a throttle threshold The throttle threshold determines how many invocations are allowed in the window before the throttle takes effect. By default, this value is set to 1, but you can change it as needed. For example, to allow 5 invocations over a 1-minute window, you would set the throttle threshold to 5. ## Selecting a throttle key A throttle function always runs per recipient. If you do not provide a throttle key, your throttle function will throttle for the executing step per recipient. If you do provide a throttle key, your throttle function will be evaluated for the key and the executing step. **A quick tip:** here's a helpful way to think about throttling. By default, the throttle function throttles on a key of `recipient_id`. When a throttle key is provided, it throttles on a key of `concat(recipient_id, throttle_key)`. Custom throttle keys must be shorter than 64 characters after being JSON- and URL-encoded. ## Frequently asked questions Yes! A dynamic throttle window can come from a variety of dynamic sources like the recipient, the environment, or within the data payload. When a throttle is hit, the workflow will stop execution. You will be able to see this in your workflow run logs. We haven’t added this ability, but if this is something you’re looking to do, please reach out to us to discuss your use case. We’d love to hear more. A throttle is allowed to be opened for a maximum of 31 days. If you have a use case for a longer throttle window, please [get in touch](mailto:support@knock.app). Absolutely, each throttle step is executed independently in a workflow, so you can have as many as you need. Currently, you cannot throttle across Knock workflows. In the future, we will be exploring adding the ability to rate-limit the number of notifications a recipient can receive in a given window of time, which will work across workflows. Currently, you cannot extend the throttle window past 31 days. If you need to throttle a workflow to run at most once per recipient, you can consider using [workflow trigger frequency](/send-notifications/triggering-workflows#controlling-workflow-trigger-frequency) instead.
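To tie the dynamic window options above back to a workflow trigger, here's a minimal sketch of passing a relative throttle duration in the trigger data using the Node SDK. The workflow key, recipient ID, alert ID, and `throttleDuration` property name are illustrative only, and the exact client construction may vary by SDK version:

```javascript title="Passing a dynamic throttle duration at trigger time"
import { Knock } from "@knocklabs/node";

const knock = new Knock(process.env.KNOCK_API_KEY);

// Assumes the workflow's throttle step is configured with a dynamic
// window that resolves the key `data.throttleDuration`.
await knock.workflows.trigger("alert-triggered", {
  recipients: ["user_123"],
  data: {
    alertId: "alert_456",
    throttleDuration: { unit: "minutes", value: 15 },
  },
});
```

If the key is missing or resolves to an invalid value at runtime, the error will be logged on the workflow run and the throttle will be skipped, as described above.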
## Trigger workflow function (Beta) Learn more about the trigger workflow function within Knock's notification engine. --- title: Trigger workflow function description: Learn more about the trigger workflow function within Knock's notification engine. tags: ["steps", "functions"] section: Designing workflows --- The trigger workflow function is currently in beta. If you'd like early access, or if this is blocking your adoption of Knock, please get in touch. A trigger workflow function enables you to invoke a workflow from within another workflow. This function allows you to compose complex notifications by reusing logic across multiple workflows, improving maintainability and reducing duplication. When using the trigger workflow function, you can use the data passed directly from the parent workflow or specify custom data for use when triggering the nested workflow. ## How trigger functions work The trigger workflow step functions similarly to a standard workflow trigger, executing a specified workflow with a specified payload. The payload is constructed based on the configuration settings defined in the step. Like other functions, the trigger workflow function runs independently for each recipient in the parent workflow. This means that if your parent workflow has three recipients, the trigger function will execute three times, creating distinct workflow runs each time. This behavior ensures that each recipient's context and data are properly isolated in the nested workflow. ## Configuring a trigger function You can configure the following trigger workflow function settings:
- Workflow (choose from any currently *active* workflow in your system)
- Recipients
- Actor (optional)
- Tenant (optional)
- Data (optional)
- Cancellation key (optional)
### Selecting the workflow You can select any active workflow for use in the trigger workflow step. The trigger function will always use the most recently committed version of the selected workflow. To ensure that the correct workflow version is triggered, you must [commit](/concepts/commits) any intended changes to the selected workflow. Any uncommitted changes to the selected workflow will not be reflected when the step is executed. If the selected workflow is later set to inactive or is archived, the trigger workflow step will be in an invalid state and the step will be skipped. ### Setting the trigger data The trigger workflow function uses strings or [Liquid](/designing-workflows/template-editor/reference-liquid-helpers) variables to define the trigger data for the nested workflow. You can reference any variables and data available in the parent workflow run. | Field | Type | Default Value | Description | | ------------------ | ------ | --------------------------------- | ------------------------------------------------------ | | `recipients` | string | `{{ recipient.id }}` | The recipient(s) who will receive the nested workflow. | | `actor` | string | `{{ actor.id }}` | The user or system initiating the nested workflow. | | `tenant` | string | `{{ tenant.id }}` | The tenant context for the nested workflow. | | `data` | string | `{{ data \| json }}` | Data payload passed to the nested workflow. | | `cancellation_key` | string | `{{ workflow.cancellation_key }}` | Unique identifier used to cancel nested workflow runs. | ### Handling Errors When configuring the trigger workflow function, you may encounter the following errors: - **Liquid Rendering Error**: This occurs when there is a syntax error in the Liquid template used for defining trigger data. Ensure that all variables and expressions are correctly formatted and available in the parent workflow context. - **Invalid Trigger Data**: If the resolved trigger data for the nested workflow is invalid, the workflow execution will fail. This can happen if required fields are missing or contain incorrect values. Double-check the data being passed to ensure it meets the expected format and requirements of the nested workflow. ## Workflow cancellation When using trigger workflow functions, both parent and nested workflows can be canceled if they contain cancelable steps (batch, delay, or fetch functions) and are configured with cancellation keys. If the parent workflow is canceled before the trigger workflow step executes, the nested workflow will not be triggered, so no separate cancellation is needed. If you need to cancel a nested workflow that has already been triggered, you can do so by making a separate cancellation request using the cancellation key configured in the trigger workflow step. Canceling the parent workflow after the trigger workflow step has executed will not automatically cancel the nested workflow - you'll need to cancel each workflow separately. ## Step conditions Learn more about how to use step conditions within the Knock workflow builder. --- title: Step conditions description: Learn more about how to use step conditions within the Knock workflow builder. tags: [ "triggers", "conditions", "conditionals", "steps", "routing", "conditional send", ] section: Designing workflows --- Step conditions allow you to apply control flow to your workflow runs on a per-step basis. 
You can use the [Knock conditions editor](/concepts/conditions#the-conditions-editor) to associate one or more conditions with any step in your workflow. Then, for each workflow run, Knock will evaluate these conditions to determine if the step should execute. Some examples of the kinds of step conditions you can design include: - Only execute a workflow if `shouldExecute == true`. - Only send an email if an in-app notification was not previously read or seen. - Only send an in-app notification if `recipient.plan == "pro"`. - Only execute a delay step if `delay == true` in the workflow trigger. - Only send an email in your development environment if the recipient's email matches a particular domain. See our [guide on the Knock conditions model](/concepts/conditions) for more information about how conditions work across Knock and how to [debug your conditions within your workflow runs](/concepts/conditions#debugging-conditions). In this guide, we cover features specific to step conditions, most importantly message status conditions. ## Types of step conditions ### Trigger step conditions A [trigger step](/designing-workflows/overview#the-trigger-step) can have one or more step conditions, which will be evaluated on the trigger of the workflow for the recipient. When the conditions evaluate to false, the workflow **will be halted** and no other steps will be executed. ### Other step conditions For all function and channel steps, step conditions will be evaluated when the step is executed. If the conditions on the step evaluate to false, then the step will be **skipped** and the subsequent step will be invoked, or the workflow will terminate if there are no other steps to execute. ## Message status conditions Message status conditions allow you to build a check for one workflow step that evaluates against the [delivery or engagement status](/send-notifications/message-statuses) of a message sent from a preceding step. When building a step message status condition, you'll use the conditions editor to select: - Any preceding channel step that may produce a message, using its `ref`. - An asserting (`"has"`) or negating (`"has not"`) condition operator. - The expected delivery or engagement status case. ### Status cases Available status cases will vary. While you can reference any preceding channel step in a message status condition, you will be presented with a different set of options depending on the case (asserting or negating) and the target step's channel type. In-app feed channel steps support certain engagement status options ("seen but not read") that others do not. The "read" and "link clicked" status conditions often require that Knock tracking has been enabled.
| Case | Limits | Description |
| ---- | ------ | ----------- |
| skipped | - | The target step was skipped and did not generate a message. |
| failed delivery | - | The message failed to deliver and Knock has exhausted all retries. |
| bounced | - | The message was successfully sent to the delivery provider but failed to send due to a bounce. |
| sent | - | The message has been successfully sent to the delivery provider. |
| delivered | - | The message has been successfully sent to the delivery provider, and Knock has confirmed delivery to your recipient. |
| seen | In-app channels only | The message has been rendered in the feed. |
| seen but not read | In-app channels only | The message has been rendered in the feed, but not yet marked as read by your recipient. |
| read | In-app channel or Knock open tracking required | The message has been marked as read. |
| read but not clicked | Knock link tracking required | The message has been marked as read, but no links have been clicked. |
| interacted with | In-app channels only | The recipient has clicked on the message. |
| link clicked | - | The recipient has clicked at least one link in the message. |
| archived | - | The message has been archived. |
### Evaluation timing Knock evaluates message status conditions, like all conditions, immediately when executing a workflow step. This means that you may need to account for time between steps when building these conditions, especially those that require some amount of recipient engagement or delivery confirmation. [See below](#example-conditionally-sending-an-email-if-an-in-app-notification-was-not-seen) for an example of using a delay step for this purpose. ### Multiple messages In certain cases, such as when using a [channel group](/integrations/overview#channel-groups), a single channel step can produce multiple messages. In these cases, Knock uses the message with the **highest** status for the condition evaluation. To determine each message's highest status, Knock looks at both its [delivery status](/send-notifications/message-statuses#delivery-status) plus each of its [engagement statuses](/send-notifications/message-statuses#engagement-status), choosing the highest value status from the group. Knock uses the following combined delivery and engagement status hierarchy (ordered from lowest to highest): - `undelivered` - `bounced` - `delivery_attempted` - `queued` - `not_sent` - `sent` - `delivered` - `seen` - `read` - `interacted` + `link_clicked` - `archived` ## Example: conditionally sending an email if an in-app notification was not seen One common use-case for step conditions is conditionally sending a notification based on whether the recipient has seen a preceding notification delivered on another channel. You can think of this concept as channel escalation, or intelligent routing. In order to implement this, your workflow will need: - An in-app notification channel step to send the initial message - A delay step so that we wait a period of time before executing the email step - An email channel step to send the escalated message Next, we'll add a condition to our email channel step that will tell Knock to only send the email if the in-app notification has not yet been seen. To do this you will: 1. Select a "Step message status" condition type. 2. Select the `ref` of the in-app step (by default named `in_app_feed_1`). 3. Select the negating "has not" operator. 4. And finally, select the "been seen" status case option.
Setting a condition on an email step that passes when the message produced by the preceding in-app step has not been seen after a 5-minute delay.
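For reference, here's roughly how Knock represents that condition under the hood, assuming the in-app step keeps its default `in_app_feed_1` ref (the full condition model is covered in the next section):

```json title="'has not been seen' condition on the email step"
{
  "variable": "refs.in_app_feed_1.engagement_status",
  "operator": "not_contains",
  "argument": "$message.seen"
}
```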
That's all it takes to build intelligent message routing in Knock! ## Advanced: How Knock models status conditions The message status condition editor provides some useful abstractions on top of Knock's [conditions model](/concepts/conditions#modeling-conditions). Under the hood, Knock stores each status condition using our standard `variable`, `operator`, and `argument` trio, with some special caveats: - The `variable` will always be either `refs.<ref>.delivery_status` or `refs.<ref>.engagement_status`, where `<ref>` is the ref of the target channel step. - The `operator` will be a hierarchical comparison operator for a delivery status condition or an inclusionary operator for an engagement status condition. - The `argument` will be a reserved status case string. Below we provide example models for each of the status conditions made available in the editor. #### Skipped cases ```json title="'has been skipped' case" { "variable": "refs.email_1.delivery_status", "operator": "equal_to", "argument": "$message.skipped" } ```
```json title="'has not been skipped' case" { "variable": "refs.email_1.delivery_status", "operator": "not_equal_to", "argument": "$message.skipped" } ``` #### Failed delivery cases ```json title="'has failed delivery' case" { "variable": "refs.email_1.delivery_status", "operator": "equal_to", "argument": "$message.undelivered" } ``` #### Bounced cases ```json title="'has bounced' case" { "variable": "refs.email_1.delivery_status", "operator": "equal_to", "argument": "$message.bounced" } ``` #### Sent cases ```json title="'has been sent' case" { "variable": "refs.email_1.delivery_status", "operator": "greater_than_or_equal_to", "argument": "$message.sent" } ```
```json title="'has not been sent' case" { "variable": "refs.email_1.delivery_status", "operator": "less_than", "argument": "$message.sent" } ``` #### Delivered cases ```json title="'has been delivered' case" { "variable": "refs.email_1.delivery_status", "operator": "greater_than_or_equal_to", "argument": "$message.delivered" } ```
```json title="'has not been delivered' case" { "variable": "refs.email_1.delivery_status", "operator": "less_than", "argument": "$message.delivered" } ``` #### Seen cases ```json title="'has been seen' case" { "variable": "refs.email_1.engagement_status", "operator": "contains", "argument": "$message.seen" } ```
```json title="'has been seen but not read' case" { "variable": "refs.email_1.engagement_status", "operator": "contains", "argument": "$message.seen_not_read" } ```
```json title="'has not been seen' case" { "variable": "refs.email_1.engagement_status", "operator": "not_contains", "argument": "$message.seen" } ``` #### Read cases ```json title="'has been read' case" { "variable": "refs.email_1.engagement_status", "operator": "contains", "argument": "$message.read" } ```
```json title="'has been read but not clicked' case" { "variable": "refs.email_1.engagement_status", "operator": "contains", "argument": "$message.read_not_link_clicked" } ```
```json title="'has not been read' case" { "variable": "refs.email_1.engagement_status", "operator": "not_contains", "argument": "$message.read" } ``` #### Interacted cases ```json title="'has been interacted with' case" { "variable": "refs.email_1.engagement_status", "operator": "contains", "argument": "$message.interacted" } ```
```json title="'has not been interacted with' case" { "variable": "refs.email_1.engagement_status", "operator": "not_contains", "argument": "$message.interacted" } ``` #### Link clicked cases ```json title="'has had a link clicked' case" { "variable": "refs.email_1.engagement_status", "operator": "contains", "argument": "$message.link_clicked" } ```
```json title="'has not had a link clicked' case" { "variable": "refs.email_1.engagement_status", "operator": "not_contains", "argument": "$message.link_clicked" } ``` #### Archived cases ```json title="'has been archived' case" { "variable": "refs.email_1.engagement_status", "operator": "contains", "argument": "$message.archived" } ```
```json title="'has not been archived' case" { "variable": "refs.email_1.engagement_status", "operator": "not_contains", "argument": "$message.archived" } ``` ## Channel steps Learn more about channel steps within Knock's notification engine. --- title: Channel steps description: Learn more about channel steps within Knock's notification engine. tags: ["steps", "channels", "functions"] section: Designing workflows --- A channel step within a workflow is the building block to produce a notification for a recipient. Channel steps house your notification templates and represent a notification to be delivered on a single channel type (e.g. email, push, SMS, in-app, etc). For a channel step to be valid it must have a [channel or channel group](/integrations/overview#channel-specific-features) associated with it. ## Channel step execution When a channel step is executed Knock does the following: 1. Runs through any [step conditions](/designing-workflows/step-conditions) to see if the step should be executed. 2. Checks the recipient has the information required to send notifications via this channel. (e.g. for an email channel, do they have an `email` address set? For a push channel do they have the [required channel data](/send-notifications/setting-channel-data) configured?) 3. Checks the [recipient's preferences](/preferences/overview) to see if they have opted out from receiving notifications on this channel or from this workflow. 4. Checks the channel's [send windows](/designing-workflows/send-windows) to see if the notification should be sent now or at a later time. If the step continues, Knock will render [the template](/designing-workflows/template-editor) associated with the step and enqueue a message to [deliver to the provider](/send-notifications/delivering-notifications) via the configured credentials on the channel. ## Channel support You can read more about configuring channels in our [integrations guide](/integrations/overview). ### In-app notifications The Knock [Feed API](/reference#feeds) gives developers a way to deliver in-app notifications to feeds, inboxes, and other notification-based experiences. There are a few ways to power in-app notifications in your product using Knock: - **Use our [React SDK](https://github.com/knocklabs/javascript/tree/main/packages/react).** The Knock notification feed component provides real-time updates, pagination, badge behavior, filtering, and more. It's a great way to quickly add an in-app feed to your product if you use React. - **Leverage our [client-side JS SDK](https://github.com/knocklabs/javascript/tree/main/packages/client).** This is a good approach if you need to use a component library outside of React JS but are still in the JS ecosystem. - **Integrate with our [API directly](/reference#feeds).** If you're not working within the JS ecosystem in your client, you can integrate directly with the Knock Feed API to power your in-app notifications. ### Out-of-app channels We support notification delivery to the following out-of-app channel types: [email](/integrations/email/overview), [push](/integrations/push/overview), [SMS](/integrations/sms/overview), and 3rd-party [chat apps](/integrations/chat/overview) (such as Slack). You can see a list of which providers we support within each channel type in the **Integrations** > **Channels** section of the Knock dashboard. ## Send windows Learn how to control when notifications are delivered using send windows. 
--- title: Send windows description: Learn how to control when notifications are delivered using send windows. tags: ["send windows", "steps", "channels", "workflows"] section: Designing workflows --- You can use send windows to specify when a channel step should send a message. For example, if you want to ensure your customers don’t receive a given transactional email from your product outside of working hours, you can set send windows for Monday - Friday, between 9:00 a.m. and 6:00 p.m. local user time. Messages generated outside of this window will be [queued](https://docs.knock.app/send-notifications/message-statuses#3-queued) until the next open window, at which time Knock will resume delivery to the downstream provider. Send windows are evaluated using the recipient's local time, specified by the user `timezone` [property](/concepts/users#optional-attributes). If the user's timezone is not set, the [account default timezone](/manage-your-account/account-timezone) will be used. ## Modeling send windows Knock models send windows as a list of send window objects. Each day must have 1 send window specified. The send window object has the following properties: | Property | Description | | -------- | ----------- | | `day` | Day of the week. One of: "monday", "tuesday", "wednesday", "thursday", "friday", "saturday", "sunday". | | `type` | Whether notifications should be sent or not sent for this send window. One of: "send", "do_not_send". | | `from` | An optional ISO-8601 time-only format string specifying the start of the window (defaults to 00:00:00). Only supported if type is set to "send". | | `until` | An optional ISO-8601 time-only format string specifying the end of the window (defaults to end of day). Only supported if type is set to "send". | In our JSON representation this will look something like: ```json title="Example send window" { "day": "monday", "type": "send", "from": "09:00:00", "until": "17:00:00" } ``` ## The send windows editor When creating or modifying a channel step, you can use the send window editor to configure send windows. If notifications are enabled for a given day of the week, you can also specify the time range during which messages will be sent on that day.
Send windows editor
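Putting the model together, here's a sketch of what a full week of send windows might look like for the working-hours example above (weekday sends between 9:00 a.m. and 6:00 p.m., no weekend sends). The exact shape stored by Knock may differ slightly, but each day gets its own send window object:

```json title="Example weekly send window configuration"
[
  { "day": "monday", "type": "send", "from": "09:00:00", "until": "18:00:00" },
  { "day": "tuesday", "type": "send", "from": "09:00:00", "until": "18:00:00" },
  { "day": "wednesday", "type": "send", "from": "09:00:00", "until": "18:00:00" },
  { "day": "thursday", "type": "send", "from": "09:00:00", "until": "18:00:00" },
  { "day": "friday", "type": "send", "from": "09:00:00", "until": "18:00:00" },
  { "day": "saturday", "type": "do_not_send" },
  { "day": "sunday", "type": "do_not_send" }
]
```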
## Partials Learn how to create reusable pieces of content using partials. --- title: Partials description: Learn how to create reusable pieces of content using partials. tags: ["partials", "templates", "custom blocks", "message templates", "workflows"] section: Designing workflows --- Partials are reusable pieces of content you can use across any of your channel templates. HTML partials can be enabled as "blocks" for use in Knock’s drag-and-drop email editor. In this page, we'll walk through how to create partials and use them in your templates using Knock's code editor or visual editor. ## Managing partials ### Creating and editing partials To get started, navigate to the "Partials" page under the "Developers" section of the main sidebar where you can create a new partial. When creating or editing partials, you can use the following properties: | Property | Description | | ------------- | ----------------------------------------------------------------------------------------------- | | `Name` | A name for your partial. | | `Key` | A unique key for your partial. | | `Type` | The type of content you want to create. This can be HTML, markdown, plaintext, or JSON. | | `Description` | An optional description of your partial. | | `Is block` | Whether or not to enable this partial as a block within the visual editor (HTML partials only). | | `Icon name` | An icon to display for this partial within the visual editor. | Partials are environment-specific and follow the same version control model as the rest of Knock. ### Editing partial content After creating a partial, you can edit its content in the code editor. You can include Liquid variables in your content which will be scoped to your partial. When using the partial in a template, you can pass in values for these variables. To include a variable in your partial, use the following syntax: `{{ variable_name }}`. Note: Partials must be committed before they can be used by templates in a given environment. Templates will always use the published version of a partial.

If you're using a partial in a template that is not yet committed or has unpublished changes, you will not see your latest changes. #### Editing HTML partials HTML partials display a preview alongside the editor. Open the preview by clicking the "Preview" button or using the `Cmd + ]` keyboard shortcut on Mac, or `Ctrl + ]` on Windows. - Select an [email layout](/integrations/email/layouts) to preview the partial within. - Use the `