# Design
URL: /docs/design
Design overview for Pothos
## Type System
The type system that powers most of the Pothos type checking has two components. The first is the
`SchemaTypes` type param passed into the SchemaBuilder. This allows a shared set of types to be
reused throughout the schema, and is responsible for providing type information for shared types
like the [Context](./guide/context) object, and any Object, Interface, or Scalar types that you want
to reference by name (as a string). Having all type information in a single object can be convenient
at times, but with large schemas it can become unwieldy.
To support a number of additional use cases, including Unions and Enums, large schemas, and plugins
that extract type information from other sources (eg the Prisma or simple-objects plugins), Pothos
has another way of passing around type information. This system is based on `Ref` objects that
contain the type information they represent. Every builder method for creating a type or a field
returns a `Ref` object.
Using Ref objects allows us to separate the type information from the implementation, and allows for
a more modular design.
# Overview
URL: /docs
Pothos - A plugin based GraphQL schema builder for typescript

Pothos is a plugin based GraphQL schema builder for TypeScript.
It makes building GraphQL schemas in TypeScript easy, fast and enjoyable. The core of Pothos adds
zero overhead at runtime, and has `graphql` as its only dependency.
Pothos is the most type-safe way to build GraphQL schemas in TypeScript, and by leveraging type
inference and TypeScript's powerful type system, Pothos requires very few manual type definitions
and no code generation.
Pothos has a unique and powerful plugin system that makes every plugin feel like its features are
built into the core library. Plugins can extend almost any part of the API by adding new options or
methods that can take full advantage of the Pothos type system.
## Hello, World
```typescript
import { createYoga } from 'graphql-yoga';
import { createServer } from 'node:http';
import SchemaBuilder from '@pothos/core';

const builder = new SchemaBuilder({});

builder.queryType({
  fields: (t) => ({
    hello: t.string({
      args: {
        name: t.arg.string(),
      },
      resolve: (parent, { name }) => `hello, ${name || 'World'}`,
    }),
  }),
});

const yoga = createYoga({
  schema: builder.toSchema(),
});

const server = createServer(yoga);

server.listen(3000, () => {
  console.log('Visit http://localhost:3000/graphql');
});
```
## What sets Pothos apart
* Pothos was built from the start to leverage TypeScript for best-in-class type-safety.
* Pothos has a clear separation between the shape of your external GraphQL API, and the internal
representation of your data.
* Pothos comes with a large plugin ecosystem that provides a wide variety of features while
maintaining great interoperability between plugins.
* Pothos does not depend on code-generation or experimental decorators for type-safety.
* Pothos has been designed to work at every scale from small prototypes to huge Enterprise
applications, and is in use at some of the largest tech companies including Airbnb and Netflix.
# LLM Integration
URL: /docs/llms
AI-ready routes for LLMs to understand your documentation
## Overview
This documentation site provides special routes designed for Large Language Models (LLMs) to better understand and interact with the Pothos documentation.
## Available Routes
### Full Documentation
#### [`/llms-full.txt`](/llms-full.txt)
A plain text representation of the entire documentation, optimized for LLM consumption. This route concatenates all documentation content into a single, easily parseable text file that includes:
* Page titles and URLs
* Page descriptions
* Full content of each documentation page
### Individual Pages
#### `/docs/[path].mdx`
Get the MDX content for any individual documentation page in an LLM-friendly format. Simply append
`.mdx` to any docs path to retrieve that page's content in a clean, parseable format.
# Resources
URL: /docs/resources
External guides, tools, and libraries created by members of the Pothos community.
## Guides and Tutorials
* [End-To-End Type-Safety with GraphQL, Prisma & React: GraphQL API](https://www.prisma.io/blog/e2e-type-safety-graphql-react-3-fbV2ZVIGWg#start-up-a-graphql-server)
by [Sabin Adams](https://twitter.com/sabinthedev)
* [Code-first GraphQL with Pothos](https://graphql.wtf/episodes/60-code-first-graphql-with-pothos)
by [Jamie Barton](https://twitter.com/notrab)
* [How to Build a Type-safe GraphQL API using Pothos and Kysely](https://dev.to/franciscomendes10866/how-to-build-a-type-safe-graphql-api-using-pothos-and-kysely-4ja3)
by [Francisco Mendes](https://github.com/FranciscoMendes10866)
* [Type-safe GraphQL Server with Pothos](https://omkarkulkarni.hashnode.dev/type-safe-graphql-server-with-pothos-formerly-giraphql)
by [Omkar Kulkarni](https://twitter.com/omkar_k45)
* [Build a GraphQL server running on Cloudflare Workers](https://the-guild.dev/blog/graphql-yoga-worker)
by [Rito Tamata](https://twitter.com/chimame_rt)
## 3rd party Tools and Libraries
* [Prisma Generator Pothos Codegen](https://github.com/Cauen/prisma-generator-pothos-codegen) by
[Emanuel](https://twitter.com/cauenor)
* [Nexus to Pothos codemod](https://github.com/villesau/nexus-to-pothos-codemod) by
[Ville Saukkonen](https://twitter.com/SaukkonenVille)
* [protoc-gen-pothos](https://github.com/proto-graphql/proto-graphql-js/tree/main/packages/protoc-gen-pothos)
by [Masayuki Izumi](https://twitter.com/izumin5210)
* [@smatch-corp/nestjs-pothos](https://github.com/smatch-corp/nestjs-pothos) by
[Chanhee Lee](https://github.com/iamchanii)
* [pothos-protoc-gen](https://iamchanii.github.io/pothos-protoc-gen/) by [Chanhee Lee](https://github.com/iamchanii)
* [rumble](https://github.com/m1212e/rumble) (GraphQL + Drizzle + Abilities) by [m1212e](https://github.com/m1212e) [(introduction)](https://github.com/hayes/pothos/discussions/1414)
## Templates and Examples
* [Pothos GraphQL Server](https://github.com/theogravity/graphql-pothos-server-example) by
[Theo Gravity](https://github.com/theogravity)
* [GraphQL countries server](https://github.com/gbicou/countries-server) by
[Benjamin VIELLARD](https://github.com/gbicou)
* [datalake-graphql-wrapper](https://github.com/dbsystel/datalake-graphql-wrapper) by
[noxify](https://github.com/noxify)
## Conference talks
* [Pothos + Prisma: delightful, type-safe and efficient GraphQL](https://www.youtube.com/watch?v=LqKPfMmxFxw)
by [Michael Hayes](https://twitter.com/yavascript)
## Paid tools
* [Bedrock](https://bedrock.mxstbr.com/) by [Max Stoiber](https://twitter.com/mxstbr)
* [nytro](https://www.nytro.dev/) by [Jordan Gensler](https://twitter.com/vapejuicejordan)
# Sponsors
URL: /docs/sponsors
The generous people supporting Pothos development
Pothos development is supported by [sponsorships](https://github.com/sponsors/hayes) from these
generous people and organizations:
* [@saevarb](https://github.com/saevarb)
* [@seanaye](https://github.com/seanaye)
* [@arimgibson](https://github.com/arimgibson)
* [@ccfiel](https://github.com/ccfiel)
* [@JoviDeCroock](https://github.com/JoviDeCroock)
* [@hellopivot](https://github.com/hellopivot)
* [@robmcguinness](https://github.com/robmcguinness)
* [@Gomah](https://github.com/Gomah)
* [@garth](https://github.com/garth)
* [@lifedup](https://github.com/lifedup)
* [@skworden](https://github.com/skworden)
* [@jacobgmathew](https://github.com/jacobgmathew)
* [@aniravi24](https://github.com/aniravi24)
* [@mizdra](https://github.com/mizdra)
* [@3nk1du](https://github.com/3nk1du)
* [@FarazPatankar](https://github.com/FarazPatankar)
* [@noxify](https://github.com/noxify)
* [@matthawk60](https://github.com/matthawk60)
* [@BitPhinix](https://github.com/BitPhinix)
* [@nathanchapman](https://github.com/nathanchapman)
* [@pradyuman](https://github.com/pradyuman)
* [@tmm](https://github.com/tmm)
# ArgBuilder
URL: /docs/api/arg-builder
API docs for Pothos ArgBuilder
## `arg(options)`
* `options`: [`FieldOptions`](./arg-builder#fieldoptions)
### FieldOptions
```typescript
type FieldOptions = {
  type: ReturnType;
  required?: boolean;
  description?: string;
  deprecationReason?: string;
  defaultValue?: unknown;
};
```
* `type`: [Type Parameter](./arg-builder#type-parameter)
* `required`: boolean, defaults to `false`, unless overwritten in SchemaBuilder. See
  [Changing Default Nullability](../guide/changing-default-nullability).
* `description`: string
* `defaultValue`: default value for field, type based on `type` option.
### Type Parameter
A Type Parameter for a Field can be any `InputTypeRef` returned by one of the
[`SchemaBuilder`](./schema-builder) methods for defining an `InputObject`, `Enum`, or `Scalar`, a TS
enum used to define a GraphQL enum type, or a string that corresponds to one of the keys of the
`Scalars` object defined in `SchemaTypes`.
## helpers
A set of helpers for creating scalar args. These work the same as `arg(options)`, but omit the
`type` field from options.
### Scalars
* `arg.string(options)`
* `arg.id(options)`
* `arg.boolean(options)`
* `arg.int(options)`
* `arg.float(options)`
* `arg.stringList(options)`
* `arg.idList(options)`
* `arg.booleanList(options)`
* `arg.intList(options)`
* `arg.floatList(options)`
* `arg.listRef(type, options)`
# FieldBuilder
URL: /docs/api/field-builder
API docs for Pothos FieldBuilder
## `field(options)`
* `options`: `FieldOptions`
### FieldOptions
```typescript
type FieldOptions = {
  type: ReturnType;
  args?: Args;
  nullable?: boolean;
  description?: string;
  deprecationReason?: string;
  resolve: (parent, args, context, info) => ResolveValue;
};
```
* `type`: [Type Parameter](./field-builder#type-parameter)
* `args`: a map of arg name to arg values. Arg values can be created using an
[`InputFieldBuilder`](./input-field-builder)
(`fieldBuilder.arg`) or using `schemaBuilder.args`
* `nullable`: boolean, defaults to `true`, unless overwritten in SchemaBuilder. See
  [Changing Default Nullability](../guide/changing-default-nullability).
* `description`: string
* `deprecationReason`: string
* `resolve`: [Resolver](./field-builder#resolver)
### Type Parameter
A Type Parameter for a Field can be any `TypeRef` returned by one of the
[`SchemaBuilder`](./schema-builder) methods for defining a type, a class used to create an object or
interface type, a TS enum used to define a GraphQL enum type, or a string that corresponds to one of
the keys of the `Objects`, `Interfaces`, or `Scalars` objects defined in `SchemaTypes`.
For List fields, the Type Parameter should be one of the above wrapped in an array eg `['User']`.
### Resolver
A function to resolve the value of this field.
#### Return type
Field resolvers should return a value (or promise) that matches the expected type for this field.
For `Scalars`, `Objects`, and `Interfaces`, this type is the corresponding type defined in
`SchemaTypes`. For Unions, the type may be any of the corresponding shapes of the members of the
union. For Enums, the value depends on the implementation of the enum. See the Enums guide for more
details.
#### Args
* `parent`: the value of the backing model for the current type, as specified in `SchemaTypes`.
* `args`: an object matching the shape of the args option for the current field
* `context`: the `Context` type defined in `SchemaTypes`.
* `info`: a `GraphQLResolveInfo` object; see
  [https://graphql.org/graphql-js/type/#graphqlobjecttype](https://graphql.org/graphql-js/type/#graphqlobjecttype)
  for more details.
## helpers
A set of helpers for creating scalar fields. These work the same as
[`field`](./field-builder#fieldoptions), but omit the `type` field from options.
### Scalars
* `string(options)`
* `id(options)`
* `boolean(options)`
* `int(options)`
* `float(options)`
* `stringList(options)`
* `idList(options)`
* `booleanList(options)`
* `intList(options)`
* `floatList(options)`
* `listRef(type, options)`
### expose
A set of helpers to expose fields from the backing model. The `name` arg can be any field from the
backing model that matches the type being exposed. Options are the same as
[`field`](./field-builder#fieldoptions), but `type` and `resolve` are omitted.
* `exposeString(name, options)`
* `exposeID(name, options)`
* `exposeBoolean(name, options)`
* `exposeInt(name, options)`
* `exposeFloat(name, options)`
* `exposeStringList(name, options)`
* `exposeIDList(name, options)`
* `exposeBooleanList(name, options)`
* `exposeIntList(name, options)`
* `exposeFloatList(name, options)`
# InputFieldBuilder
URL: /docs/api/input-field-builder
API docs for Pothos InputFieldBuilder
## `field(options)`
* `options`: [`FieldOptions`](./input-field-builder#fieldoptions)
### FieldOptions
```typescript
type FieldOptions = {
  type: ReturnType;
  required?: boolean;
  description?: string;
  deprecationReason?: string;
  defaultValue?: unknown;
};
```
* `type`: [Type Parameter](./input-field-builder#type-parameter)
* `required`: boolean, defaults to `false`, unless overwritten in SchemaBuilder. See
[Changing Default Nullability](../guide/changing-default-nullability).
* `description`: string
* `defaultValue`: default value for field, type based on `type` option.
### Type Parameter
A Type Parameter for a Field can be any `InputTypeRef` returned by one of the
[`SchemaBuilder`](./schema-builder) methods for defining an `InputObject`, `Enum`, or `Scalar`, a TS
enum used to define a GraphQL enum type, or a string that corresponds to one of the keys of the
`Scalars` object defined in [`SchemaTypes`](./schema-builder#schematypes).
## helpers
A set of helpers for creating scalar input fields. These work the same as `field`, but omit the
`type` field from options.
### Scalars
* `string(options)`
* `id(options)`
* `boolean(options)`
* `int(options)`
* `float(options)`
* `stringList(options)`
* `idList(options)`
* `booleanList(options)`
* `intList(options)`
* `floatList(options)`
* `listRef(type, options)`
# SchemaBuilder
URL: /docs/api/schema-builder
API docs for Pothos SchemaBuilder
SchemaBuilder is the core class of Pothos. It can be used to build types, and merge them into a
graphql.js Schema.
## `constructor(options)`
* typeParam: `SchemaTypes`: A type that describes the backing models for your schema
* options: `SchemaBuilderOptions`
### `SchemaTypes`
```typescript
type SchemaTypes = {
  // Shape of the context arg in your resolvers
  Context?: object;
  // A map of Object type names to their backing models.
  Objects?: object;
  // A map of Input type names to their backing models.
  Inputs?: object;
  // A map of Interface type names to their backing models.
  Interfaces?: object;
  // A map of Scalar names to Input and Output shapes. Can be used to overwrite
  // default scalar types, or to add type information for custom scalars.
  Scalars?: {
    [s: string]: {
      Input: unknown;
      Output: unknown;
    };
  };
  // When set to false, fields will be NonNullable by default (requires a
  // corresponding change in builder options)
  DefaultFieldNullability?: false;
  // When set to true, input fields and arguments will be required by default
  // (requires a corresponding change in builder options)
  DefaultInputFieldRequiredness?: true;
};
```
### SchemaBuilderOptions
```typescript
type SchemaBuilderOptions = {};
```
By default there are no options for SchemaBuilder, but plugins may contribute additional options.
## `queryType(options, fields?)`
creates the `Query` type with a set of Query fields
* `options`: `QueryTypeOptions`
* `fields?`: a function that receives a [`FieldBuilder`](./field-builder), and returns an object of
field names to field refs. See [`FieldBuilder`](./field-builder) for more details.
### QueryTypeOptions
```typescript
type QueryTypeOptions = {
  description?: string;
  fields: FieldsFunction;
};
```
* `description`: A description of the current type
* `fields`: a function that receives a [`FieldBuilder`](./field-builder), and returns an object of
field names to field refs. See [`FieldBuilder`](./field-builder) for more details.
## `queryFields(fields)`
add a set of fields to the `Query` type.
* `fields`: a function that receives a [`FieldBuilder`](./field-builder), and returns an object of
field names to field refs. See [`FieldBuilder`](./field-builder) for more details.
## `queryField(name, field)`
add a single field to the `Query` type.
* `name`: the name of the field
* `field`: a function that receives a [`FieldBuilder`](./field-builder), and returns field ref. See
[`FieldBuilder`](./field-builder) for more details.
## `mutationType(options, fields?)`
creates the `Mutation` type with a set of Mutation fields
* `options`: `MutationTypeOptions`
* `fields?`: a function that receives a [`FieldBuilder`](./field-builder), and returns an object of
field names to field refs. See [`FieldBuilder`](./field-builder) for more details.
### MutationTypeOptions
```typescript
type MutationTypeOptions = {
  description?: string;
  fields: FieldsFunction;
};
```
* `description`: A description of the current type
* `fields`: a function that receives a [`FieldBuilder`](./field-builder), and returns an object of
field names to field refs. See [`FieldBuilder`](./field-builder) for more details.
## `mutationFields(fields)`
add a set of fields to the `Mutation` type.
* `fields`: a function that receives a [`FieldBuilder`](./field-builder), and returns an object of
field names to field refs. See [`FieldBuilder`](./field-builder) for more details.
## `mutationField(name, field)`
add a single field to the `Mutation` type.
* `name`: the name of the field
* `field`: a function that receives a [`FieldBuilder`](./field-builder), and returns field ref. See
[`FieldBuilder`](./field-builder) for more details.
## `subscriptionType(options, fields?)`
creates the `Subscription` type with a set of Subscription fields
* `options`: `SubscriptionTypeOptions`
* `fields?`: a function that receives a [`FieldBuilder`](./field-builder), and returns an object of
field names to field refs. See [`FieldBuilder`](./field-builder) for more details.
### SubscriptionTypeOptions
```typescript
type SubscriptionTypeOptions = {
  description?: string;
  fields: FieldsFunction;
};
```
* `description`: A description of the current type
* `fields`: a function that receives a [`FieldBuilder`](./field-builder), and returns an object of
field names to field refs. See [`FieldBuilder`](./field-builder) for more details.
## `subscriptionFields(fields)`
add a set of fields to the `Subscription` type.
* `fields`: a function that receives a [`FieldBuilder`](./field-builder), and returns an object of
field names to field refs. See [`FieldBuilder`](./field-builder) for more details.
## `subscriptionField(name, field)`
add a single field to the `Subscription` type.
* `name`: the name of the field
* `field`: a function that receives a [`FieldBuilder`](./field-builder), and returns field ref. See
[`FieldBuilder`](./field-builder) for more details.
## `objectType(param, options, fields?)`
* `param`: A key of the `Objects` property in `SchemaTypes`, a class, or a TypeRef created by
`builder.objectRef`
* `options`: `ObjectTypeOptions`
* `fields`: a function that receives a [`FieldBuilder`](./field-builder), and returns an object of
field names to field refs. See [`FieldBuilder`](./field-builder) for more details.
### `ObjectTypeOptions`
```typescript
type ObjectTypeOptions = {
  description?: string;
  fields: FieldsFunction;
  interfaces?: Interfaces;
  isTypeOf?: (obj: InterfaceShape) => boolean;
  name?: string;
};
```
* `description`: A description of the current type
* `fields`: a function that receives a [`FieldBuilder`](./field-builder), and returns an object of
field names to field refs. See [`FieldBuilder`](./field-builder) for more details.
* `isTypeOf`: Recommended when implementing interfaces. This method is used to determine whether a
  value of an implemented interface is of the current type.
* `interfaces`: an array of interfaces implemented by this object type. Items in this array should
  be interface params. See the `param` argument of `interfaceType`.
* `name`: name of the GraphQL type. Required when `param` is a class.
## `objectFields(param, fields)`
add a set of fields to the object type.
* `param`: A key of the `Objects` property in `SchemaTypes`, a class, or a TypeRef created by
`builder.objectRef`
* `fields`: a function that receives a [`FieldBuilder`](./field-builder), and returns an object of
field names to field refs. See [`FieldBuilder`](./field-builder) for more details.
## `objectField(param, name, field)`
add a single field to the object type.
* `name`: the name of the field
* `param`: A key of the `Objects` property in `SchemaTypes`, a class, or a TypeRef created by
`builder.objectRef`
* `field`: a function that receives a [`FieldBuilder`](./field-builder), and returns field ref. See
[`FieldBuilder`](./field-builder) for more details.
## `objectRef(name)`
Creates a Ref object representing an object type that has not been implemented yet. This can be
useful for building certain types of plugins, or when building a modular schema where you don't want
to define all types in `SchemaTypes` or import the actual implementation of each object type you
use.
* `name`: string, name of the type that this ref represents. Can be overwritten when implemented.
* `T`: a type param to define the backing shape for the type that this ref represents
## `interfaceType(param, options, fields?)`
* `param`: A key of the `Interfaces` property in `SchemaTypes`, a class, or a TypeRef created by
`builder.interfaceRef`
* `options`: `InterfaceTypeOptions`
* `fields`: a function that receives a [`FieldBuilder`](./field-builder), and returns an object of
field names to field refs. See [`FieldBuilder`](./field-builder) for more details.
### `InterfaceTypeOptions`
```typescript
type InterfaceTypeOptions = {
  description?: string;
  fields: FieldsFunction;
  interfaces?: Interfaces;
  name?: string;
};
```
* `description`: A description of the current type
* `fields`: a function that receives a [`FieldBuilder`](./field-builder), and returns an object of
field names to field refs. See [`FieldBuilder`](./field-builder) for more details.
* `interfaces`: an array of interfaces implemented by this interface type. Items in this array
  should be interface params. See the `param` argument of `interfaceType`.
* `name`: name of the GraphQL type. Required when `param` is a class.
## `interfaceFields(param, fields)`
add a set of fields to the interface type.
* `param`: A key of the `Interfaces` property in `SchemaTypes`, a class, or a TypeRef created by
`builder.interfaceRef`
* `fields`: a function that receives a [`FieldBuilder`](./field-builder), and returns an object of
field names to field refs. See [`FieldBuilder`](./field-builder) for more details.
## `interfaceField(param, name, field)`
add a single field to the interface type.
* `param`: A key of the `Interfaces` property in `SchemaTypes`, a class, or a TypeRef created by
`builder.interfaceRef`
* `name`: the name of the field
* `field`: a function that receives a [`FieldBuilder`](./field-builder), and returns field ref. See
[`FieldBuilder`](./field-builder) for more details.
## `interfaceRef(name)`
Creates a Ref object representing an interface that has not been implemented yet. This can be useful
for building certain types of plugins, or when building a modular schema where you don't want to
define all types in `SchemaTypes` or import the actual implementation of each interface type you
use.
* `name`: string, name of the type that this ref represents. Can be overwritten when implemented.
* `T`: a type param to define the backing shape for the type that this ref represents
## `unionType(name, options)`
* `name`: A string
* `options`: `UnionTypeOptions`
### `UnionTypeOptions`
```typescript
type UnionTypeOptions = {
  description?: string;
  types: Member[] | (() => Member[]);
  resolveType: (parent: UnionShape, context) => MaybePromise<string>;
};
```
* `description`: A description of the current type
* `types`: an array of object types included in the union type. Items in this array should be Object
params. See `param` argument in `builder.objectType`.
* `resolveType`: A function called when resolving the type of a union value. `parent` will be a
union of the backing models of the types provided in `types`. This function should return the name
of one of the union member types.
## `enumType(param, options)`
* `param`: A string name of the enum or a typescript enum
* `options`: `EnumTypeOptions`
### `EnumTypeOptions`
```typescript
type EnumTypeOptions = {
  description?: string;
  values?: Values;
  name?: string;
};
```
* `description`: A description of the current type
* `values`: can be either an array of strings (you may need to use `as const` to get proper type
names) or a `GraphQLEnumValueConfigMap`. values is only required when param is not an enum
* `name`: required when param is an enum
## `addScalarType(name, scalar, options)`
* `name`: A key of the `Scalars` property in `SchemaTypes`
* `scalar`: A `GraphQLScalar`
## `scalarType(name, options)`
* `name`: A key of the `Scalars` property in `SchemaTypes`
* `options`: `ScalarTypeOptions`
### ScalarTypeOptions
```typescript
type ScalarTypeOptions = {
  description?: string;
  // Serializes an internal value to include in a response.
  serialize: GraphQLScalarSerializer;
  // Parses an externally provided value to use as an input.
  parseValue?: GraphQLScalarValueParser;
  // Parses an externally provided literal value to use as an input.
  parseLiteral?: GraphQLScalarLiteralParser;
  extensions?: Readonly<Record<string, unknown>>;
};
```
## `inputType(param, options)`
* `param`: a string or InputRef created by `builder.inputRef`
* `options`: `InputTypeOptions`
### `InputTypeOptions`
```typescript
type InputTypeOptions = {
  description?: string;
  fields: InputShape;
};
```
* `description`: A description of the current type
* `fields`: a function that receives an `InputFieldBuilder`, and returns an object of field names to
  field definitions. See [`InputFieldBuilder`](./input-field-builder) for more details. If `name` is
  a key of the `Inputs` property in `SchemaTypes`, the fields will show type errors for any fields
  that do not match the types provided in `SchemaTypes`.
## `inputRef(name)`
Creates a Ref object representing an input object that has not been implemented yet. This can be
useful for defining recursive input types, for building certain types of plugins, or when building a
modular schema where you don't want to define all types in `SchemaTypes` or import the actual
implementation of each input type you use.
* `name`: string, name of the type that this ref represents. Can be overwritten when implemented.
* `T`: a type param to define the backing shape for the type that this ref represents
## `args(fields)`
Creates an arguments object which can be used as the `args` option in a field definition.
* `fields`: a function that receives an [`ArgBuilder`](./arg-builder), and returns an object of
field names to field definitions. See [`ArgBuilder`](./arg-builder) for more details.
## `toSchema(types)`
Takes an array of types created by [`SchemaBuilder`](./schema-builder#schemabuilder) and returns a
[`GraphQLSchema`](https://graphql.org/graphql-js/type/#graphqlschema).
## `SchemaBuilder.allowPluginReRegistration`
`SchemaBuilder.allowPluginReRegistration` is a static `boolean` on the SchemaBuilder class that can
be set to allow plugins to call registerPlugin multiple times. This is useful for hot-module
reloading, but is `false` by default to catch any issues with duplicate versions of a plugin.
# File layout
URL: /docs/guide/app-layout
Guide for Pothos app layouts
Pothos tries not to be opinionated about how you structure your code, and provides multiple ways of
doing many things. This short guide covers a few conventions I use, as a starting place for anyone
who is just looking for a decent setup that should just work. Everything suggested here is just a
recommendation and is completely optional.
## Files
Here are a few files I create in almost every Pothos schema I have built:
* `src/server.ts`: Setup and run your server (This might be graphql-yoga or @apollo/server)
* `src/builder.ts`: Setup for your schema builder. Does not contain any definitions for types in
your schema
* `src/schema.ts` or `src/schema/index.ts`: Imports all the files that define part of your schema,
but does not define types itself. Exports `builder.toSchema()`
* `src/types.ts`: Define shared types used across your schema including a type for your context
object. This should be imported when creating your builder, and may be used by many other files.
* `src/schema/*.ts`: Actual definitions for your schema types.
## Imports
Import types directly from the files that define them rather than importing from another file like
`index.ts` that re-exports them. `index.ts` files can still be useful for loading all files in a
directory, but they should generally NOT export any values.
## Plugins
Which plugins you use is completely up to you. For my own projects, I will use the `simple-objects`,
`scope-auth`, and `mocks` plugins in every project, and some of the other plugins as needed.
`mocks` and `scope-auth` are fairly self-explanatory. The `simple-objects` plugin can make building
out a graph much quicker, because you don't have to have explicit types or models for every object
in your graph. I frequently find that I just want to add an object of a specific shape, and then let
the parent field figure out how to return an object of the right shape.
## Backing models
Pothos gives you a lot of control over how you define the types that your schema and resolvers use,
which can make figuring out the right approach confusing at first. In my projects, I try to avoid
using the `SchemaTypes` approach for defining backing models. Instead, I tend to use model classes
for defining most of the important objects in my graph, and fall back to using either the
simple-objects plugin or `builder.objectRef(name).implement({...})` when it does not make
sense to define a class for my data.
## Co-locating queries
In bigger graphs, having all your queries/entry points defined in one place can become hard to
manage. Instead, I prefer to define queries alongside the types they return. For example, queries
for a `User` type would be defined in the same file that contains the definition for the `User`
type, rather than in a central `queries.ts` file (using `builder.queryField`).
# Using args
URL: /docs/guide/args
Guide for defining field args in Pothos
Similar to the [Fields Guide](./fields), the examples here will mostly be for the Query type, but
the same patterns can be used anywhere that arguments for fields can be defined, including both
Object and Interface types.
## Scalars
Scalar args can be defined a couple of different ways:
### Using the `t.arg` method
```typescript
const Query = builder.queryType({
fields: (t) => ({
string: t.string({
args: {
string: t.arg({
type: 'String',
description: 'String arg',
}),
},
resolve: (parent, args) => args.string,
}),
}),
});
```
### Using convenience methods
```typescript
const Query = builder.queryType({
fields: (t) => ({
withArgs: t.stringList({
args: {
id: t.arg.id(),
int: t.arg.int(),
float: t.arg.float(),
boolean: t.arg.boolean(),
string: t.arg.string(),
idList: t.arg.idList(),
intList: t.arg.intList(),
floatList: t.arg.floatList(),
booleanList: t.arg.booleanList(),
stringList: t.arg.stringList(),
},
resolve: (root, args) => Object.keys(args),
}),
}),
});
```
## Other types
Args of non-scalar types can also be created with the `t.arg` method.
Valid arg types include `Scalars`, `Enums`, and `Input` types.
```typescript
const LengthUnit = builder.enumType('LengthUnit', {
values: { Feet: {}, Meters: {} },
});
const Giraffe = builder.objectType('Giraffe', {
fields: t => ({
height: t.float({
args: {
unit: t.arg({
type: LengthUnit,
}),
},
resolve: (parent, args) =>
args.unit === 'Feet' ? parent.heightInMeters * 3.281 : parent.heightInMeters,
}),
}),
});
```
## Required args
Arguments are optional by default, but can be made required by passing `required: true` in the
argument options. This default can be changed in the SchemaBuilder constructor, see
[Changing Default Nullability](./changing-default-nullability).
```typescript
const Query = builder.queryType({
fields: (t) => ({
nullableArgs: t.stringList({
args: {
optional: t.arg.string(),
required: t.arg.string({ required: true }),
requiredList: t.arg.stringList({ required: true }),
sparseList: t.arg.stringList({
required: {
list: true,
items: false,
},
}),
},
resolve: (parent, args) => Object.keys(args),
}),
}),
});
```
Note that by default even if a list arg is optional, the items in that list are not. The last
argument in the example above shows how you can make list items optional.
## Lists
To create a list argument, you can wrap the type in an array or use one of the helpers
```typescript
const Query = builder.queryType({
fields: (t) => ({
giraffeNameChecker: t.booleanList({
args: {
names: t.arg.stringList({
required: true,
}),
moreNames: t.arg({
type: ['String'],
required: true,
}),
},
resolve: (parent, args) => {
return [...args.names, ...args.moreNames].filter((name) =>
['Gina', 'James'].includes(name),
);
},
}),
}),
});
```
## Nested Lists
You can use `t.arg.listRef` to create a list of lists
```typescript
const Query = builder.queryType({
fields: (t) => ({
example: t.boolean({
args: {
listOfListOfStrings: t.arg({
type: t.arg.listRef(t.arg.listRef('String')),
}),
listOfListOfNullableStrings: t.arg({
type: t.arg.listRef(
// By default listRef creates a list of Non-null items
// This can be overridden by passing in required: false
t.arg.listRef('String', { required: false }),
{ required: true },
),
}),
},
resolve: (parent, args) => {
return true;
},
}),
}),
});
```
# Default nullability
URL: /docs/guide/changing-default-nullability
Guide for changing default nullability in Pothos
***
title: Default nullability
description: Guide for changing default nullability in Pothos
-------------------------------------------------------------
By default, Fields and arguments in Pothos are Nullable. This default can be overwritten by
setting `nullable: false` in the options for output fields and by setting `required: true` for input
fields or arguments.
These defaults may not be the right choice for every application, and changing them on every field
can be a pain. Instead, Pothos allows overwriting these defaults when setting up your SchemaBuilder.
You will need to provide the new defaults in two places:
1. In the type parameter for the builder, which enables the type checking to work with your new
settings.
2. In the Builder options, so that the correct schema is built at run time.
```typescript
// Create a Builder that makes output fields NonNullable by default
export const builder = new SchemaBuilder<{
DefaultFieldNullability: false;
}>({
defaultFieldNullability: false,
});
// Create a Builder that makes input fields and arguments required by default
export const builder = new SchemaBuilder<{
DefaultInputFieldRequiredness: true;
}>({
defaultInputFieldRequiredness: true,
});
```
# Circular References
URL: /docs/guide/circular-references
Guide for how circular references and dependencies are managed in Pothos
***
title: Circular References
description: Guide for how circular references and dependencies are managed in Pothos
-------------------------------------------------------------------------------------
Circular references and circular dependencies are common problems that can appear in a number of
ways, and cause a variety of different issues.
Pothos has a number of built in mitigations to help avoid these issues, and tries to provide
additional APIs to help with situations that can't be automatically avoided.
This guide should provide some insight into how to resolve any issues you may run into, but will
hopefully not be needed very often.
## Imports
Circular imports are something that can cause issues in any javascript or typescript project, but
can become more common in graphql because of its interconnected nature.
When js/ts files either directly or indirectly import each other, the exports from one file will
initially be undefined while executing the main body of the other. These issues often result in
confusing and unrelated errors because the relevant values are often not used until much later.
Pothos mitigates this by deferring a lot of the processing until the `builder.toSchema()` method is
called. As long as the file that builds the schema (calls the `toSchema` method) is not imported by
any other file that defines parts of the schema, this will ensure that all types are properly
imported, and types are not unexpectedly undefined.
As you can see in the example below, the references to `Post` and `User` when defining fields are
wrapped inside the `fields` function. Because this function is not executed until the schema is
loaded, these types of circular imports should work without causing any issues.
```ts
// user.ts
import { Post } from './post'
export const User = builder.objectType('User', {
fields: t => ({ posts: t.expose('posts', { type: [Post]}) }),
})
// post.ts
import { User } from './user'
export const Post = builder.objectType('Post', {
fields: t => ({ author: t.expose('author', { type: User }) }),
})
// schema.ts
export const schema = builder.toSchema()
```
Another best practice is to avoid importing from `index.ts` files by importing from the file that
defines the value directly. The easiest way to achieve this is by not exporting values from
`index.ts` files.
```ts
// bad
export * from './enums';
export * from './objects';
// better
import './enums';
import './objects';
```
## Defining Circular or Recursive types
A large portion of the Pothos API is designed to work well with circular references, but there are a
few cases where typescript is unable to resolve circular references correctly.
What should work without any issues:
* Objects and interfaces referenced via a class
* Objects and interfaces referenced via a string (by providing a type mapping when creating the
SchemaBuilder)
* Objects defined by plugins like Prisma that derive type information from an external source
Cases that may require some modification:
* Input objects with circular references
* Object types defined with `builder.objectRef`
* Objects defined by plugins like `dataloader` that infer the backing model type from options passed
to the type.
## Input objects
Defining recursive input types is described in the [Input Guide](./inputs#recursive-inputs).
## Object refs
Object refs may cause issues with circular references if the refs are implemented before they are
assigned to a variable. This can easily be avoided by moving the call to `ref.implement` into its
own statement.
```typescript
// May cause issues
export const User = builder.objectRef('User').implement({...});
// Should be safe
export const User = builder.objectRef('User');
User.implement({...});
```
Using object refs is often a great way to avoid issues with circular references because it allows
you to define the reference before defining any fields for your type. Many of the builder methods in
Pothos and its plugins can be passed a type ref instead of a name:
```typescript
export const User = builder.objectRef('User');
builder.objectType(User, {
fields: (t) => ({
// Circular references here won't cause issues, because User is already defined above
}),
});
```
## Defining fields separately
Another easy workaround is to define any fields that are causing issues separately:
```ts
export const User = builder.objectRef('User').implement({
fields: (t) => ({ posts: t.expose('posts', { type: [Post] }) }),
});
export const Post = builder.objectRef('Post').implement({
fields: (t) => ({
// No more circular reference
}),
});
builder.objectField(Post, 'author', (t) => t.expose('author', { type: User }));
```
# Using Context
URL: /docs/guide/context
Guide for using context object in Pothos
***
title: Using Context
description: Guide for using context object in Pothos
-----------------------------------------------------
The GraphQL context object can be used to give every resolver in the schema access to some shared
state for the current request. One common use case is to store the current User on the context
object.
One important thing to note about Pothos is that every request is assumed to have a new unique
context object, so be sure to set up your context objects in a way that they are unique to each
request.
First let's define a User class that holds information about a user, and create a SchemaBuilder with
a Context type that has a currentUser property.
```typescript
class User {
id: string;
firstName: string;
username: string;
constructor(id: string, firstName: string, username: string) {
this.id = id;
this.firstName = firstName;
this.username = username;
}
}
const builder = new SchemaBuilder<{
Context: {
currentUser: User;
};
}>({});
```
Next, we will want to add something in our schema that uses the current user:
```typescript
builder.queryType({
fields: (t) => ({
currentUser: t.field({
type: User,
resolve: (root, args, context) => context.currentUser,
}),
}),
});
builder.objectType(User, {
fields: (t) => ({
id: t.exposeID('id', {}),
firstName: t.exposeString('firstName', {}),
username: t.exposeString('username', {}),
}),
});
```
Finally, we need to actually create our context when a request is created.
```typescript
const yoga = createYoga({
schema,
context: async ({ req }) => ({
// This part is up to you!
currentUser: await getUserFromAuthHeader(req.headers.authorization),
}),
});
const server = createServer(yoga);
server.listen(3000);
```
## Initialize context cache
Several Pothos plugins use the context object to cache data for the current request. Some examples
include dataloaders and auth scopes. This caching mechanism works based on the assumption that the
same context object is passed to every resolver in a request, and each request has a unique context
object. This works for most applications without any additional configuration.
In some rare edge cases, you may have some additional logic added to your application that clones or
mutates the context object throughout the execution of a request. To ensure that all plugins work
correctly even if the context object is cloned, wrapped, or modified in a way that does not preserve
its identity, you can manually initialize the context cache and attach it to the context object:
```typescript
import { initContextCache } from '@pothos/core';
const yoga = createYoga({
schema: builder.toSchema(),
context: async ({ req }) => ({
// Adding this will prevent any issues if your server implementation
// copies or extends the context object before passing it to your resolvers
...initContextCache(),
currentUser: await getUserFromAuthHeader(req.headers.authorization),
}),
});
const server = createServer(yoga);
server.listen(3000);
```
## Context when using multiple protocols
In some situations, multiple protocols may be used to handle GraphQL operations against the same executable schema. A common example is handling query and mutation operations over [HTTP](https://developer.mozilla.org/en-US/docs/Web/HTTP), while handling subscription operations over a [WebSocket](https://developer.mozilla.org/en-US/docs/Web/API/WebSocket). Because the protocols differ, the protocol-specific information available in the GraphQL context may depend on which operation is being executed. **Our recommendation is to keep your executable GraphQL schema and its inner layers protocol-agnostic, so you do not have to deal with a situation like this.**
If you do need to work with two different GraphQL contexts in your resolvers while keeping strong type-safety, we recommend using [typescript discriminated unions](https://www.typescriptlang.org/docs/handbook/2/narrowing.html#discriminated-unions) to combine the context types for each protocol into a single union type that is passed to the Pothos SchemaBuilder. In the following example, `Context` is a union of the HTTP-specific and WebSocket-specific context types, with the `isSubscription` boolean field as the discriminator. Within your resolvers, checking the value of `isSubscription` narrows the context to the matching protocol-specific type, as the `if`/`else` blocks below demonstrate:
```typescript
type Context =
| {
isSubscription: false;
http: "HTTP specific context field.";
}
| {
isSubscription: true;
websocket: "Websocket specific context field.";
};
const builder = new SchemaBuilder<{
Context: Context;
}>({});
builder.mutationType({
fields: (t) => ({
incrementCount: t.int({
resolve: (parent, args, ctx) => {
if (ctx.isSubscription === false) {
// Access the HTTP protocol specific context fields.
ctx.http;
} else {
// Access the Websocket protocol specific context fields.
ctx.websocket;
}
},
}),
}),
});
builder.subscriptionType({
fields: (t) => ({
currentCount: t.int({
subscribe: (parent, args, ctx) => {
if (ctx.isSubscription === false) {
// Access the HTTP protocol specific context fields.
ctx.http;
} else {
// Access the Websocket protocol specific context fields.
ctx.websocket;
}
},
}),
}),
});
```
# Using Deno
URL: /docs/guide/deno
Guide for using Pothos with deno
***
title: Using Deno
description: Guide for using Pothos with deno
---------------------------------------------
Pothos is compatible with [Deno](https://deno.land/), and can be used with
[GraphQL Yoga](https://the-guild.dev/graphql/yoga-server) which now also supports deno!
## Imports
There are a number of different ways to import Pothos, but the best option is usually to set up
[import maps](https://deno.land/manual@v1.28.3/basics/modules/import_maps) and import from
[esm.sh](https://esm.sh).
### Import maps
```json
// import_map.json
{
"imports": {
// define a version of graphql, this should be shared by all graphql libraries
"graphql": "https://esm.sh/graphql@16.6.0",
// Marking graphql as external allows the graphql from this import map to be used
"graphql-yoga": "https://esm.sh/graphql-yoga?external=graphql",
// the `*` prefix in the package name marks all dependencies (only graphql in this case) as external
// this ensures the version of graphql defined above is used
"@pothos/core": "https://esm.sh/*@pothos/core@3.23.1",
// Plugins should mark all dependencies as external as well
// this will ensure that both graphql and @pothos/core use the versions defined above
// some plugins like validation may require additional dependencies to be added to the import map (eg. zod)
"@pothos/plugin-relay": "https://esm.sh/*@pothos/plugin-relay@3.30.0"
}
}
```
### deno.json
```json
// deno.jsonc
{
"importMap": "import_map.json"
}
```
## Server
```typescript
// src/index.ts
import { serve } from 'https://deno.land/std@0.157.0/http/server.ts';
import { createYoga } from 'graphql-yoga';
import SchemaBuilder from '@pothos/core';
const builder = new SchemaBuilder({});
builder.queryType({
fields: (t) => ({
hello: t.string({
args: {
name: t.arg.string({}),
},
resolve: (_, { name }) => `hello, ${name || 'World'}`,
}),
}),
});
const yoga = createYoga({
schema: builder.toSchema(),
});
serve(yoga, {
onListen({ hostname, port }) {
console.log(`Listening on http://${hostname}:${port}/graphql`);
},
});
```
## Running the app
```bash
deno run --allow-net src/index.ts
```
## Without import maps
In some cases (like when using the deno deploy playground) you may not be able to use import maps.
In this case, you can use query parameters with esm.sh to ensure that shared versions of packages
are used:
```ts
import { serve } from 'https://deno.land/std@0.157.0/http/server.ts';
// for graphql-yoga and @pothos/core, 'graphql' is the most important dependency to pin
import { createYoga } from 'https://esm.sh/graphql-yoga@3.1.1?deps=graphql@16.6.0';
import SchemaBuilder from 'https://esm.sh/@pothos/core@3.23.1?deps=graphql@16.6.0';
// for pothos plugins, you should pin both 'graphql' and '@pothos/core'
import RelayPlugin from 'https://esm.sh/@pothos/plugin-relay@3.30.0?deps=graphql@16.6.0,@pothos/core@3.23.1';
```
## The @pothos/deno package
The `@pothos/deno` package contains a typescript-only version of most of the pothos plugins. This is
no longer the recommended way to use pothos with deno, but will continue to be published with new
changes.
The files for this package are published to npm, and can be consumed from a number of CDNs. The
benefit of this is that all plugins are bundled with pothos/core, and import directly so you do not
need to mess with dependencies to ensure that plugins are using the correct version of pothos/core.
### example
```typescript
// dependencies of @pothos/deno are imported from https://cdn.skypack.dev/{package} to ensure
// that the same version of 'graphql' is used, import other dependencies from Skypack as well
import { serve } from 'https://deno.land/std@0.157.0/http/server.ts';
import { createYoga } from 'https://cdn.skypack.dev/graphql-yoga@3.1.1';
import SchemaBuilder from 'https://esm.sh/@pothos/deno/packages/core/mod.ts';
import RelayPlugin from 'https://esm.sh/@pothos/deno/packages/plugin-relay/mod.ts';
```
Pothos imports graphql from `https://cdn.skypack.dev/graphql?dts`; this URL can be added to an
import map to import a different version of graphql.
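For example (the version pin here is illustrative), an import map entry overriding that graphql import could look like:

```json
{
  "imports": {
    "graphql": "https://cdn.skypack.dev/graphql@16.6.0?dts"
  }
}
```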
# Enums
URL: /docs/guide/enums
Guide for defining Enum types in Pothos
***
title: Enums
description: Guide for defining Enum types in Pothos
----------------------------------------------------
Enums can be defined a number of different ways:
1. Using typescript enums
```typescript
export enum Diet {
HERBIVOROUS,
CARNIVOROUS,
OMNIVOROUS,
}
builder.enumType(Diet, {
name: 'Diet',
});
```
2. Using an array of strings
```typescript
export const LengthUnit = builder.enumType('LengthUnit', {
values: ['Feet', 'Meters'] as const,
});
```
Note that we use `as const` to allow ts to properly type our enum values.
3. Using a values object:
```typescript
export const GiraffeSpecies = builder.enumType('GiraffeSpecies', {
values: {
Southern: {
description: 'Also known as two-horned giraffe',
value: 'giraffa',
},
Masai: {
value: 'tippelskirchi',
},
Reticulated: {
value: 'reticulata',
},
Northern: {
value: 'camelopardalis',
},
} as const,
});
```
Again we use `as const` here to allow the enum values to be correctly inferred. The `as const`
can also be added to the values instead, or omitted if the `values` are already defined using a
variable that typescript can type correctly.
Using a values object like this enables defining additional options like a description for each
enum value.
Using a values object also allows the name of the enum value to be different from the typescript
value used internally in your resolvers.
The keys (eg `Southern`) are used as the name of the enum value in your GraphQL schema, and the
`value` (eg. `'giraffa'`) property is used as the value you will receive in the arguments for
your resolvers, or the value you need to return from your resolvers. This is similar to how
typescript enum values can be assigned string or numeric values.
4. Using an object with `as const`
```ts
const VehicleType = {
sedan: 'SEDAN',
suv: 'SUV',
truck: 'TRUCK',
motorcycle: 'MOTORCYCLE',
} as const;
const VehicleTypeEnum = builder.enumType('VehicleType', {
values: Object.fromEntries(
Object.entries(VehicleType).map(([name, value]) => [name, { value }]),
),
});
```
Modern TypeScript codebases may prefer using objects with `as const` over enums to align with
JavaScript standards. This approach essentially mirrors the "array of strings" method. You can use
`Object.entries` and `Object.fromEntries` to convert such an object to the values-object form
described above.
For more detailed information, you can refer to the TypeScript handbook on
[Objects vs Enums](https://www.typescriptlang.org/docs/handbook/enums.html#objects-vs-enums).
Alternatively, using `Object.keys` or `Object.values` will allow you to produce an enum that uses
just the keys or values of the object for both the internal typescript and name in the GraphQL
schema.
```ts
const VehicleType = {
sedan: 'SEDAN',
suv: 'SUV',
truck: 'TRUCK',
motorcycle: 'MOTORCYCLE',
} as const;
const VehicleTypeEnum = builder.enumType('VehicleType', {
values: Object.values(VehicleType),
});
// Or
const VehicleTypeEnum = builder.enumType('VehicleType', {
values: Object.keys(VehicleType) as (keyof typeof VehicleType)[],
});
```
## Using Enum Types
Enums can be referenced either by the `Ref` that was returned by calling `builder.enumType` or by
using the typescript enum. They can be used either as arguments, or as field return types:
```typescript
builder.objectFields('Giraffe', (t) => ({
height: t.float({
args: {
unit: t.arg({
type: LengthUnit,
required: true,
defaultValue: 'Meters',
}),
},
resolve: (parent, args) =>
args.unit === 'Meters' ? parent.heightInMeters : parent.heightInMeters * 3.281,
}),
diet: t.field({
description:
'While Giraffes are herbivores, they do eat the bones of dead animals to get extra calcium',
type: Diet,
resolve: () => Diet.HERBIVOROUS,
}),
species: t.field({
type: GiraffeSpecies,
resolve: () => 'camelopardalis' as const,
}),
}));
```
# Fields
URL: /docs/guide/fields
Guide for defining fields in Pothos
***
title: Fields
description: Guide for defining fields in Pothos
------------------------------------------------
Fields for [Object](./objects) and [Interface](./interfaces) types are defined using a shape
function. This is a function that takes a [FieldBuilder](../api/field-builder) as an argument, and
returns an object whose keys are field names, and whose values are fields created by the
[FieldBuilder](../api/field-builder). These examples will mostly add fields to the `Query` type, but
the topics covered in this guide should apply to any object or interface type.
## Scalars
Scalar fields can be defined a couple of different ways:
### Field method
```typescript
builder.queryType({
fields: (t) => ({
name: t.field({
description: 'Name field',
type: 'String',
resolve: () => 'Gina',
}),
}),
});
```
### Convenience methods
Convenience methods are just wrappers around the `field` method that omit the `type` option.
```typescript
builder.queryType({
fields: (t) => ({
id: t.id({ resolve: () => '123' }),
int: t.int({ resolve: () => 123 }),
float: t.float({ resolve: () => 1.23 }),
boolean: t.boolean({ resolve: () => false }),
string: t.string({ resolve: () => 'abc' }),
idList: t.idList({ resolve: () => ['123'] }),
intList: t.intList({ resolve: () => [123] }),
floatList: t.floatList({ resolve: () => [1.23] }),
booleanList: t.booleanList({ resolve: () => [false] }),
stringList: t.stringList({ resolve: () => ['abc'] }),
}),
});
```
## Other types
Fields for non-scalar fields can also be created with the `field` method.
Some types like [Objects](./objects) and [Interfaces](./interfaces) can be referenced by name if
they have a backing model defined in the schema builder.
```typescript
const builder = new SchemaBuilder<{
Objects: { Giraffe: { name: string } };
}>({});
builder.queryType({
fields: t => ({
giraffe: t.field({
description: 'A giraffe',
type: 'Giraffe',
resolve: () => ({ name: 'Gina' }),
}),
}),
});
```
For types not described in the `SchemaTypes` type provided to the builder, including types that can
not be added there like [Unions](./unions) and [Enums](./enums), you can use a `Ref` returned by the
builder method that created them in the `type` parameter. For types created using a class
([Objects](./objects) or [Interfaces](./interfaces)) or [Enums](./enums) created using a typescript
enum, you can also use the `class` or `enum` that was used to define them.
```typescript
const LengthUnit = builder.enumType('LengthUnit', {
values: { Feet: {}, Meters: {} },
});
builder.objectType('Giraffe', {
fields: (t) => ({
preferredNeckLengthUnit: t.field({
type: LengthUnit,
resolve: () => 'Feet',
}),
}),
});
builder.queryType({
fields: (t) => ({
giraffe: t.field({
type: 'Giraffe',
resolve: () => ({ name: 'Gina' }),
}),
}),
});
```
## Lists
To create a list field, you can wrap the type in an array
```typescript
builder.queryType({
fields: t => ({
giraffes: t.field({
description: 'multiple giraffes',
type: ['Giraffe'],
resolve: () => [{ name: 'Gina' }, { name: 'James' }],
}),
giraffeNames: t.field({
type: ['String'],
resolve: () => ['Gina', 'James'],
})
}),
});
```
## NonNullable fields
Fields in Pothos are nullable by default, but fields can be made NonNullable by setting the
`nullable` option to `false`. This default can also be changed in the SchemaBuilder constructor, see
[Changing Default Nullability](./changing-default-nullability) for more details.
```typescript
builder.queryType({
fields: (t) => ({
nonNullableField: t.field({
type: 'String',
nullable: false,
resolve: () => 'non-null value',
}),
nonNullableString: t.string({
nullable: false,
resolve: () => 'non-null value',
}),
nonNullableList: t.field({
type: ['String'],
nullable: false,
resolve: () => ['non-null value'],
}),
sparseList: t.field({
type: ['String'],
nullable: {
list: false,
items: true,
},
resolve: () => [null],
}),
}),
});
```
Note that by default even if a list field is nullable, the items in that list are not. The last
example above shows how you can make list items nullable.
## Exposing fields from the underlying data
Some GraphQL implementations have a concept of "default resolvers" that can automatically resolve
fields that have a property of the same name in the underlying data. In Pothos, these relationships
need to be explicitly defined, but there are helper methods that make exposing fields easier.
These helpers are not available for root types (Query, Mutation and Subscription), but will work
on any other object type or interface.
```typescript
const builder = new SchemaBuilder<{
Objects: { Giraffe: { name: string } };
}>({});
builder.objectType('Giraffe', {
fields: (t) => ({
name: t.exposeString('name', {}),
}),
});
```
The available expose helpers are:
* `exposeString`
* `exposeInt`
* `exposeFloat`
* `exposeBoolean`
* `exposeID`
* `exposeStringList`
* `exposeIntList`
* `exposeFloatList`
* `exposeBooleanList`
* `exposeIDList`
## Arguments
Arguments for a field can be defined in the options for a field:
```typescript
builder.queryType({
fields: (t) => ({
giraffeByName: t.field({
type: 'Giraffe',
args: {
name: t.arg.string({ required: true }),
},
resolve: (root, args) => {
if (args.name !== 'Gina') {
throw new NotFoundError(`Unknown Giraffe ${args.name}`);
}
return { name: 'Gina' };
},
}),
}),
});
```
For more information see the [Arguments Guide](./args).
## Adding fields to existing type
In addition to being able to define fields when defining types, you can also add additional fields
independently. This is useful for breaking up types with a lot of fields into multiple files, or
co-locating fields with their type (e.g., add all query/mutation fields for a type in the same file
where the type is defined).
```typescript
builder.queryFields((t) => ({
giraffe: t.field({
type: Giraffe,
resolve: () => new Giraffe('James', new Date(Date.UTC(2012, 11, 12)), 5.2),
}),
}));
builder.objectField(Giraffe, 'ageInDogYears', (t) =>
t.int({
resolve: (parent) => parent.age * 7,
}),
);
```
To see all the methods available for defining fields, see the [SchemaBuilder API](./schema-builder).
## Nested Lists
You can use `t.listRef` to create a list of lists
```typescript
const Query = builder.queryType({
fields: (t) => ({
example: t.field({
type: t.listRef(
t.listRef('String'),
// items are non-nullable by default, this can be overridden
// by passing `nullable: true`
{ nullable: true },
),
resolve: (parent, args) => {
return [['a', 'b'], ['c', 'd'], null];
},
}),
}),
});
```
# Generating client types
URL: /docs/guide/generating-client-types
Guide for generating client types from a Pothos schema
***
title: Generating client types
description: Guide for generating client types from a Pothos schema
-------------------------------------------------------------------
Pothos does not have a built in mechanism for generating types to use with a client, but
[graphql-code-generator](https://www.graphql-code-generator.com/) can be configured to consume a
schema directly from your typescript files.
## Export your schema
The first thing you will need is a file that exports your built schema. The schema should be
exported as `schema` or as the default export. This will be used to generate your client types, but
can also be the schema you use in your server.
```typescript
// schema.ts
// Import the builder
import builder from './builder';
// Import your type definitions
import './types/Query';
import './types/User';
import './types/Posts';
// Build and export the schema
export const schema = builder.toSchema();
```
## Setting up graphql-code-generator
There are many different ways to set up graphql-code-generator, and the details depend a lot on your
needs.
See the
[graphql-code-generator documentation](https://www.graphql-code-generator.com/docs/getting-started/installation)
for more details.
### Install the codegen packages
```package-install
npm install --save graphql
npm install --save -D typescript @graphql-codegen/cli @graphql-codegen/client-preset
```
### Configure the codegen to import your schema
Create a `codegen.ts` file in the root of your project:
```typescript
import type { CodegenConfig } from '@graphql-codegen/cli';
import { printSchema } from 'graphql';
import { schema } from './src/schema';
const config: CodegenConfig = {
schema: printSchema(schema),
documents: ['src/**/*.tsx'],
generates: {
'./src/gql/': {
preset: 'client',
plugins: [],
},
},
};
export default config;
```
You can customize this config as needed, but the relevant parts are:
* Importing your GraphQL schema; this should be the result of calling `builder.toSchema()`
* Using `printSchema` from `graphql` to convert the schema to a string
## Generating a schema.graphql file with graphql-code-generator
You can generate a schema.graphql file with graphql-code-generator by adding the `schema-ast`
plugin:
```package-install
npm install --save -D @graphql-codegen/schema-ast
```
```typescript
// codegen.ts
import { printSchema } from 'graphql';
import type { CodegenConfig } from '@graphql-codegen/cli';
import { schema } from './src/schema';
const config: CodegenConfig = {
schema: printSchema(schema),
documents: ['src/**/*.tsx'],
generates: {
'./src/gql/': {
preset: 'client',
plugins: [],
},
'schema.graphql': {
plugins: ['schema-ast'],
},
},
};
export default config;
```
## Adding scalars
If you are using scalars (e.g. from `graphql-scalars`), you will need to add them to `codegen.ts` or
else they will resolve to `any`. Here is an example for `UUID` and `DateTime`:
```typescript
const config: CodegenConfig = {
...,
config: {
scalars: {
UUID: 'string',
DateTime: 'Date',
},
},
};
```
## Alternatives
In some cases you may want to use an alternative method for loading your schema.
### Printing the schema to a file
You can use the `printSchema` function from `graphql` to print your schema to a file, see
[Printing Schemas](/docs/guide/printing-schemas) for more details:
By writing the schema to a file, you will be able to load the schema from a file instead of
importing it each time you want to generate your client types.
Having your schema written to a file and checked into source control has many benefits, like easier
code reviews and better interoperability with other schema-dependent graphql tools, so setting this
up is worthwhile even if you do not need it for generating client types:
```typescript
import type { CodegenConfig } from '@graphql-codegen/cli';
const config: CodegenConfig = {
schema: './path/to/schema.graphql',
documents: ['src/**/*.tsx'],
generates: {
'./src/gql/': {
preset: 'client',
plugins: [],
},
},
};
export default config;
```
### Using introspection from your dev (or production) server
Rather than using a schema SDL file, graphql-code-generator can also use introspection to load
your schema. To do this, you will need to ensure that your server has introspection enabled. Most
servers will have introspection enabled by default in development, and disabled in production.
You can then configure graphql-code-generator to use introspection by passing the URL to your
graphql endpoint:
```typescript
import type { CodegenConfig } from '@graphql-codegen/cli';
const config: CodegenConfig = {
schema: 'https://localhost:3000/graphql',
documents: ['src/**/*.tsx'],
generates: {
'./src/gql/': {
preset: 'client',
plugins: [],
},
},
};
export default config;
```
# Guide
URL: /docs/guide
Guide for building GraphQL APIs with Pothos
***
title: Guide
description: Guide for building GraphQL APIs with Pothos
--------------------------------------------------------
## Installing
```package-install
npm install --save @pothos/core graphql-yoga
```
## Set up typescript
Pothos is designed to be as type-safe as possible, to ensure everything works correctly, make sure
that your `tsconfig.json` has `strict` mode set to true:
```json
{
"compilerOptions": {
"strict": true
}
}
```
## Create a simple schema
```typescript
import SchemaBuilder from '@pothos/core';
const builder = new SchemaBuilder({});
builder.queryType({
fields: (t) => ({
hello: t.string({
args: {
name: t.arg.string(),
},
resolve: (parent, { name }) => `hello, ${name || 'World'}`,
}),
}),
});
const schema = builder.toSchema();
```
## Create a server
The schema generated by Pothos is a standard graphql.js schema and can be used with several graphql
server implementations including `graphql-yoga`.
```typescript
import { createYoga } from 'graphql-yoga';
import { createServer } from 'node:http';
const yoga = createYoga({
schema: builder.toSchema(),
});
const server = createServer(yoga);
server.listen(3000);
```
# Inferring Types
URL: /docs/guide/inferring-types
Inferring typescript Types from Refs
***
title: Inferring Types
description: Inferring typescript Types from Refs
-------------------------------------------------
In some cases you may want to use the types from your input or object refs to build helpers, or
provide accurate types for other functions.
To get types from any Pothos `ref` object, you can use the `$inferType` and `$inferInput` properties
on the ref. This pattern is inspired by [drizzle ORM](https://orm.drizzle.team/).
```ts
const MyInput = builder.inputType('MyInput', {
fields: (t) => ({
id: t.id({ required: true }),
name: t.string({ required: true }),
}),
});
// { id: string; name: string; }
type MyInputShape = typeof MyInput.$inferInput;
// infer the shape of the Prisma User model
const UserRef = builder.prismaObject('User', {});
type UserType = typeof UserRef.$inferType;
```
When building helpers, most Pothos types have a generic called `Types` that extends `SchemaTypes`.
This combines all the defaults and settings passed in when creating the SchemaBuilder. To make your
own type helpers and utility functions, you often need access to the `Types` used by your builder.
This can be inferred from the builder using `typeof builder.$inferSchemaTypes`.
The following is a simple helper for creating objects that have an `id` field. The helper
itself isn't that useful, but shows how inferring SchemaTypes from a builder can work.
```ts
import { FieldMap } from '@pothos/core';

type BuilderTypes = typeof builder.$inferSchemaTypes;

function createObjectWithId<T extends { id: string }>(
  name: string,
  fields: (t: PothosSchemaTypes.ObjectFieldBuilder<BuilderTypes, T>) => FieldMap,
) {
  const ref = builder.objectRef<T>(name);
  ref.implement({
    fields: (t) => ({
      ...fields(t),
      id: t.id({
        resolve: (parent) => parent.id,
        nullable: false,
      }),
    }),
  });
  return ref;
}
createObjectWithId<{
id: string;
name: string;
}>('User', (t) => ({
name: t.exposeString('name'),
}));
```
Rather than explicitly using the inferred type, you can also infer SchemaTypes from the builder in
an argument. In the following example, we pass the builder into `createPaginationArgs`, and infer
the `Types` from the provided builder. This is useful when building helpers that might be used with
multiple builder instances.
```ts
import { SchemaTypes } from '@pothos/core';

function createPaginationArgs<Types extends SchemaTypes>(
  builder: PothosSchemaTypes.SchemaBuilder<Types>,
) {
return builder.args((t) => ({
limit: t.int(),
offset: t.int(),
}));
}
builder.queryField('getUsers', (t) =>
t.field({
type: ['String'],
args: {
...createPaginationArgs(builder),
},
resolve: () => [],
}),
);
```
# Input Objects
URL: /docs/guide/inputs
Guide for defining Input Object types in Pothos
***
title: Input Objects
description: Guide for defining Input Object types in Pothos
------------------------------------------------------------
## Creating Input objects
Input objects can be created using `builder.inputType`.
```typescript
const GiraffeInput = builder.inputType('GiraffeInput', {
fields: (t) => ({
name: t.string({ required: true }),
birthdate: t.string({ required: true }),
height: t.float({ required: true }),
}),
});
builder.mutationType({
fields: (t) => ({
createGiraffe: t.field({
type: Giraffe,
args: {
input: t.arg({ type: GiraffeInput, required: true }),
},
resolve: (root, args) =>
new Giraffe(args.input.name, new Date(args.input.birthdate), args.input.height),
}),
}),
});
```
## Recursive inputs
Recursive input types are slightly more complicated to implement because their types can't easily
be inferred. Referencing other input types works without any additional logic, as long as
there is no circular reference to the original type.
To build input types with recursive references you can use `builder.inputRef` along with a type or
interface that describes the fields of your input object. The builder will still ensure all the
types are correct, but needs type definitions to help infer the correct values.
```typescript
interface RecursiveGiraffeInputShape {
name: string;
birthdate: string;
height: number;
friends?: RecursiveGiraffeInputShape[];
}
const RecursiveGiraffeInput = builder
.inputRef<RecursiveGiraffeInputShape>('RecursiveGiraffeInput')
.implement({
fields: (t) => ({
name: t.string({ required: true }),
birthdate: t.string({ required: true }),
height: t.float({ required: true }),
friends: t.field({
type: [RecursiveGiraffeInput],
}),
}),
});
builder.mutationType({
fields: (t) => ({
createGiraffeWithFriends: t.field({
type: [Giraffe],
args: {
input: t.arg({ type: RecursiveGiraffeInput, required: true }),
},
resolve: (root, args) => {
const friends = (args.input.friends || []).map(
(friend) =>
new Giraffe(friend.name, new Date(friend.birthdate), friend.height),
);
return [
new Giraffe(args.input.name, new Date(args.input.birthdate), args.input.height),
...friends,
];
},
}),
}),
});
```
## Additional way to define Input types
If you're unable to use the builder ref directly by assigning it to a variable as depicted above,
you can provide an `Inputs` type to the `SchemaBuilder`.
This is useful in a scenario where you have multiple schema builders.
```typescript
const builder = new SchemaBuilder<{
Inputs: {
GiraffeInput: {
name: string;
birthdate: string;
height: number;
};
};
}>({});
builder.inputType('GiraffeInput', {
fields: (t) => ({
name: t.string({ required: true }),
birthdate: t.string({ required: true }),
height: t.float({ required: true }),
}),
});
builder.mutationType({
fields: (t) => ({
createGiraffe: t.field({
type: Giraffe,
args: {
input: t.arg({ type: 'GiraffeInput', required: true }),
},
resolve: (root, args) =>
new Giraffe(args.input.name, new Date(args.input.birthdate), args.input.height),
}),
}),
});
```
# Interfaces
URL: /docs/guide/interfaces
Guide for defining Interface types in Pothos
***
title: Interfaces
description: Guide for defining Interface types in Pothos
---------------------------------------------------------
## Defining Interface Types
Defining interfaces works exactly like [defining Objects](./objects), using the `Interfaces` key in
the SchemaTypes object for the builder, and `interfaceRef` rather than `objectRef`.
In this example we'll use an Animal class and a Giraffe class that extends it:
```typescript
export class Animal {
diet: Diet;
constructor(diet: Diet) {
this.diet = diet;
}
}
export class Giraffe extends Animal {
name: string;
birthday: Date;
heightInMeters: number;
constructor(name: string, birthday: Date, heightInMeters: number) {
super(Diet.HERBIVOROUS);
this.name = name;
this.birthday = birthday;
this.heightInMeters = heightInMeters;
}
}
export enum Diet {
HERBIVOROUS,
CARNIVOROUS,
OMNIVOROUS,
}
```
Again, using classes is completely optional. The only requirement for interfaces is that the type
used for defining objects must be a superset of the types of any interfaces they implement.
Now that we have our classes set up, we can define the interface type and add an enum definition
for our diet field:
```typescript
builder.interfaceType(Animal, {
name: 'AnimalFromClass',
fields: (t) => ({
diet: t.expose('diet', {
type: Diet,
}),
}),
});
builder.enumType(Diet, {
name: 'Diet',
});
```
## Implementing interfaces with object types
```typescript
builder.objectType(Giraffe, {
name: 'Giraffe',
interfaces: [Animal],
isTypeOf: (value) => value instanceof Giraffe,
fields: (t) => ({
name: t.exposeString('name', {}),
}),
});
```
There are 2 new properties here: `interfaces` and `isTypeOf`.
Interfaces is an array of interfaces that the object type implements, and `isTypeOf` is a function
that is run whenever we have an object of the interface type and we want to see if it's actually an
instance of our object type.
## Using an Interface as a return type
Using interfaces as return types for fields works just like objects:
```typescript
builder.queryFields((t) => ({
animal: t.field({
type: Animal,
resolve: () => new Giraffe('James', new Date(Date.UTC(2012, 11, 12)), 5.2),
}),
}));
```
## Querying interface fields
We can query interface fields like diet on any field that returns a giraffe:
```graphql
query {
giraffe {
name
diet
}
}
```
or we can query a field that returns an interface and select different fields depending on the
concrete type:
```graphql
query {
animal {
diet
... on Giraffe {
name
}
}
}
```
# Objects
URL: /docs/guide/objects
Guide for defining Object types in Pothos
***
title: Objects
description: Guide for defining Object types in Pothos
------------------------------------------------------
This guide will walk you through creating your first object types. Some concepts in this guide will
be explained further in later guides.
### Defining an Object type
When adding a new type to your schema, you'll need to figure out how the data behind this type will be
represented. Pothos entirely decouples your data from your GraphQL schema, and has many different
ways to implement Objects in your schema.
In this guide, we will be implementing a `Giraffe` object type:
```typescript
interface Giraffe {
name: string;
birthday: Date;
heightInMeters: number;
}
```
The easiest way to create a new Object type based on an existing TypeScript type is with the `objectRef`
method:
```typescript
const builder = new SchemaBuilder({});
const GiraffeRef = builder.objectRef<Giraffe>('Giraffe');
```
This will create a new `ObjectRef` that can be used to reference the `Giraffe` type in other parts
of the schema. By passing in the Giraffe interface, we give the `ObjectRef` the information it needs
to ensure that fields we add to the Giraffe type are type-safe, and that any fields that reference
the Giraffe type return the expected data.
Next, we need to add an implementation for the `Giraffe` type:
```typescript
const GiraffeRef = builder.objectRef<Giraffe>('Giraffe');
GiraffeRef.implement({
description: 'Long necks, cool patterns, taller than you.',
fields: (t) => ({}),
});
```
In the implementation, we can add a description (optional) and a function to define the fields
available to query on the Giraffe type.
### Add some fields
The `fields` function receives a `FieldBuilder` instance that can be used to define the fields for
your type. The `FieldBuilder` will be covered in more detail in the [fields guide](./fields).
```typescript
GiraffeRef.implement({
fields: (t) => ({
name: t.exposeString('name'),
height: t.exposeFloat('heightInMeters'),
age: t.int({
resolve: (parent) => {
// Do some date math to get an approximate age from a birthday
const ageDifMs = Date.now() - parent.birthday.getTime();
const ageDate = new Date(ageDifMs); // milliseconds from epoch
return Math.abs(ageDate.getUTCFullYear() - 1970);
},
}),
}),
});
```
You'll notice that we haven't added any additional typescript definitions when defining our fields.
Pothos uses the type provided to `objectRef` to ensure that the fields we add to the Giraffe
type are type-safe. This type is only used to ensure that the implementation is type-safe;
Pothos will never automatically expose properties from the underlying data without an explicit
field definition.
In the example above, we have examples of "exposing" data from the underlying type, as well as a
field that requires some additional logic to resolve.
## Add a query
We can create a root `Query` object with a field that returns a giraffe using `builder.queryType`
```typescript
builder.queryType({
fields: (t) => ({
giraffe: t.field({
type: GiraffeRef,
resolve: () => ({
name: 'James',
birthday: new Date(Date.UTC(2012, 11, 12)),
heightInMeters: 5.2,
}),
}),
}),
});
```
We can use the `ObjectRef` created earlier as the `type` option when defining fields that return the
Giraffe type.
### Create a server
Pothos schemas build into a plain schema that uses types from the `graphql` package. This means it
should be compatible with most of the popular GraphQL server implementations for node. In this guide
we will use `graphql-yoga` but you can use whatever server you want.
```typescript
import { createServer } from 'http';
import { createYoga } from 'graphql-yoga';

// Build the schema and start the server with the types we wrote above
const yoga = createYoga({
  schema: builder.toSchema(),
  context: (ctx) => ({
    user: { id: Number.parseInt(ctx.request.headers.get('x-user-id') ?? '1', 10) },
  }),
});
export const server = createServer(yoga);
server.listen(3000);
```
### Query your data
1. Run your server (either with `ts-node`, or by compiling your code and running it with node).
2. Open [http://0.0.0.0:3000/graphql](http://0.0.0.0:3000/graphql) to open the playground and query
your API:
```graphql
query {
giraffe {
name
age
height
}
}
```
## Different ways to define Object types
There are many different ways that you can provide type information to Pothos about what the
underlying data in your graph will be. Depending on how the rest of your application is structured
you can pick the approach that works best for you, or use a combination of different styles.
### Using Refs
ObjectRefs (the method shown above) are the most flexible solution, and make it easy to integrate
Pothos with data sources that have their own TypeScript types.
Object refs can be created using `builder.objectRef`, and then implemented by calling the
`implement` method on the ref, or by passing the ref to `builder.objectType`:
```typescript
const GiraffeRef = builder.objectRef<Giraffe>('Giraffe').implement({
description: 'Long necks, cool patterns, taller than you.',
fields: (t) => ({}),
});
```
When using objectRefs with circular dependencies, ensure that the `implement` method is called as
a separate statement, or typescript may complain about circular references:
### Using classes
If your data is already represented as a class, Pothos supports using the classes themselves as
ObjectRefs. This allows you to define a type-safe schema with minimal typescript definitions.
```typescript
export class Giraffe {
name: string;
birthday: Date;
heightInMeters: number;
constructor(name: string, birthday: Date, heightInMeters: number) {
this.name = name;
this.birthday = birthday;
this.heightInMeters = heightInMeters;
}
}
builder.objectType(Giraffe, {
// Name is required when using a class as an ObjectRef
name: 'Giraffe',
description: 'Long necks, cool patterns, taller than you.',
fields: (t) => ({}),
});
builder.queryFields((t) => ({
giraffe: t.field({
type: Giraffe,
resolve: () => new Giraffe('James', new Date(Date.UTC(2012, 11, 12)), 5.2),
}),
}));
```
### Using SchemaTypes
You can also provide type mappings when you create the [SchemaBuilder](./schema-builder), which
allows you to reference the types by name throughout your schema (as a string).
```typescript
const builder = new SchemaBuilder<{ Objects: { Giraffe: GiraffeType } }>({});
builder.objectType('Giraffe', {
description: 'Long necks, cool patterns, taller than you.',
fields: (t) => ({}),
});
builder.queryFields((t) => ({
giraffe: t.field({
type: 'Giraffe',
resolve: () => ({
name: 'James',
birthday: new Date(Date.UTC(2012, 11, 12)),
heightInMeters: 5.2,
}),
}),
}));
```
This is ideal when you want to list out all the types for your schema in one place, or you have
interfaces/types that define your data rather than classes, and means you won't have to import
anything when referencing the object type in other parts of the schema.
The type signature for [SchemaBuilder](../api/schema-builder) is described in more detail
[later](../api/schema-builder), for now, it is enough to know that the `Objects` type provided to
the schema builder allows you to map the names of object types to type definitions that describe the
data for those types.
# Patterns
URL: /docs/guide/patterns
Guide for using common patterns in Pothos
***
title: Patterns
description: Guide for using common patterns in Pothos
------------------------------------------------------
## Sharing fields between types
If you have common fields or arguments that are shared across multiple types (but you don't want to
use an interface to share the common logic) you can write helper functions to generate these fields
for you.
### Objects and Interfaces
```typescript
import { ObjectRef } from '@pothos/core';
import builder from './builder';
function addCommonFields(refs: ObjectRef<unknown, { id: string }>[]) {
for (const ref of refs) {
builder.objectFields(ref, (t) => ({
id: t.exposeID('id', {}),
idLength: t.int({
resolve: (parent) => parent.id.length,
}),
}));
}
}
const WithCommonFields1 = builder.objectRef<{ id: string }>('WithCommonFields1').implement({});
const WithCommonFields2 = builder.objectRef<{ id: string }>('WithCommonFields2').implement({});
addCommonFields([WithCommonFields1, WithCommonFields2]);
```
This will apply the `id` and `idLength` fields to both of the object types. The `ObjectRef` type is
what is returned when creating an object (or when calling `builder.objectRef`). It takes 2 generic
parameters: The first is the shape a resolver is expected to resolve to for that type, and the
second is the shape of the parent arg when defining a field on that type. These 2 are generally the
same, but can differ for some special cases (like with `loadableObject` from the dataloader plugin,
which allows resolvers to resolve to an `ID` rather than the actual object). In this case, we only
care about the second parameter since we are defining fields.
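To illustrate the two parameters, here is a simplified stand-in type (not Pothos's actual
`ObjectRef` implementation) showing the case where the two shapes match, and the dataloader-style
case where they differ:

```typescript
// Simplified stand-in for ObjectRef's generics, for illustration only.
// Shape: what a resolver may resolve to for this type.
// Parent: what field resolvers on this type receive as `parent`.
interface RefSketch<Shape, Parent = Shape> {
  name: string;
}

// Usually both shapes are the same:
const userRef: RefSketch<{ id: string }> = { name: 'User' };

// With something like the dataloader plugin's loadableObject, resolvers may
// return just an ID, while fields still receive the loaded object:
const loadableUserRef: RefSketch<string | { id: string }, { id: string }> = {
  name: 'User',
};
```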
If you want to define fields on an interface, you can use `InterfaceRef` instead. If your helper
accepts both, you can differentiate the refs by using `ref.kind` which will be either `Object` or
`Interface`.
### Args
Args are a little more complicated than fields on objects and interfaces. Pothos infers the shape
of args for your resolvers, so you can't just add more args later. Instead, we can define a helper
that returns a set of args to apply to your field. To make this work, we need to get a few extra
types:
```typescript
import SchemaBuilder, { ArgBuilder } from '@pothos/core';
export interface SchemaTypes {
Scalars: {
ID: {
Input: string;
Output: string;
};
};
}
export type TypesWithDefaults = PothosSchemaTypes.ExtendDefaultTypes<SchemaTypes>;
const builder = new SchemaBuilder<SchemaTypes>({});
function createCommonArgs(arg: ArgBuilder<TypesWithDefaults>) {
return {
id: arg.id({}),
reason: arg({ type: 'String', required: false }),
};
}
builder.mutationType({
fields: (t) => ({
mutation1: t.boolean({
args: {
...createCommonArgs(t.arg),
},
resolve: (parent, args) => !!args.reason,
}),
mutation2: t.boolean({
args: {
...createCommonArgs(t.arg),
},
resolve: (parent, args) => !!args.reason,
}),
}),
});
```
In this example `SchemaTypes` are the types that will be provided to the builder when it is created.
Internally Pothos extends these with some default types. This extended set of types is what gets
passed around in many of Pothos's internal types. To correctly type our helper function, we need to
create a version of `SchemaTypes` with the same defaults Pothos adds in (`TypesWithDefaults`). Once
we have `TypesWithDefaults` we can define a helper function that accepts an arg builder
(`ArgBuilder`) and creates a set of arguments.
The last step is to call your helper with `t.arg` (the arg builder), and spread the returned args
into the args object for the current field.
### Input fields
Input fields are similar to args, and also all need to be present when the type is defined so that
Pothos can infer the correct types.
```typescript
import { InputFieldBuilder } from '@pothos/core';
import builder, { TypesWithDefaults } from './builder';
function createInputFields(t: InputFieldBuilder<TypesWithDefaults>) {
return {
id: t.id({}),
reason: t.field({ type: 'String', required: false }),
};
}
builder.inputType('InputWithCommonFields1', {
fields: (t) => ({
...createInputFields(t),
}),
});
builder.inputType('InputWithCommonFields2', {
fields: (t) => ({
...createInputFields(t),
}),
});
```
# Printing Schemas
URL: /docs/guide/printing-schemas
Guide for printing a Pothos schema to an SDL schema file
***
title: Printing Schemas
description: Guide for printing a Pothos schema to an SDL schema file
---------------------------------------------------------------------
Sometimes it's useful to have an SDL version of your schema. To do this, you can use some tools from
the `graphql` package to write your schema out as SDL to a file.
```typescript
import { writeFileSync } from 'fs';
import { printSchema, lexicographicSortSchema } from 'graphql';
import SchemaBuilder from '@pothos/core';
const builder = new SchemaBuilder({});
builder.queryType({
fields: (t) => ({
hello: t.string({
args: {
name: t.arg.string(),
},
resolve: (parent, { name }) => `hello, ${name || 'World'}`,
}),
}),
});
const schema = builder.toSchema();
const schemaAsString = printSchema(lexicographicSortSchema(schema));
writeFileSync('/path/to/schema.graphql', schemaAsString);
```
## Using graphql-code-generator
An alternative to printing your schema directly is to generate your schema file using
graphql-code-generator.
You can add the `schema-ast` plugin to have graphql-code-generator generate your schema file for
you.
See [Generating Client Types](./generating-client-types) for more details
# Queries, Mutations and Subscriptions
URL: /docs/guide/queries-mutations-and-subscriptions
Guide for adding queries, mutations and subscriptions to your schema
***
title: Queries, Mutations and Subscriptions
description: Guide for adding queries, mutations and subscriptions to your schema
---------------------------------------------------------------------------------
There are a few different ways to add queries to your schema. The simplest way is to define a
`Query` type with your query fields using the `builder.queryType()` method.
```typescript
builder.queryType({
fields: (t) => ({
// Add query for a simple scalar type
hello: t.string({
resolve: () => 'hello, world!',
}),
// Add a query for an object type
giraffe: t.field({
type: Giraffe,
resolve: () => ({
name: 'James',
}),
}),
// Add a query for a list of objects
giraffes: t.field({
type: [Giraffe],
resolve: () => [
{
name: 'James',
},
],
}),
}),
});
const Giraffe = builder.objectRef<{ name: string }>('Giraffe');
Giraffe.implement({
fields: (t) => ({
name: t.exposeString('name'),
}),
});
```
You can only use `builder.queryType()` once in your schema, because it is responsible for defining
the `Query` type itself. If you want to split up your queries and add query fields individually, you
can use the `builder.queryField()` method to add individual query fields to the `Query` type.
```typescript
// You will still need to define the `Query` type somewhere in your schema to add individual query fields
builder.queryType({});
builder.queryField('hello', (t) =>
t.string({
resolve: () => 'hello, world!',
}),
);
builder.queryField('giraffe', (t) =>
t.field({
type: Giraffe,
resolve: () => ({
name: 'James',
}),
}),
);
```
If you want to add multiple query fields at once, you can use the `builder.queryFields()` method.
```typescript
builder.queryFields((t) => ({
hello: t.string({
resolve: () => 'hello, world!',
}),
giraffe: t.field({
type: Giraffe,
resolve: () => ({
name: 'James',
}),
}),
}));
```
# Mutations
Mutations work just like queries, and you can use the `builder.mutationType()`,
`builder.mutationField()`, and `builder.mutationFields()` methods to add mutations to your schema.
```typescript
builder.mutationType({
fields: (t) => ({
// Add mutation that returns a simple boolean
post: t.boolean({
args: {
message: t.arg.string(),
},
resolve: async (root, args) => {
// Do something with the message
const success = await messageClient.postMessage(args.message);
return success;
},
}),
}),
});
builder.mutationField('createGiraffe', (t) =>
t.field({
type: Giraffe,
args: {
name: t.arg.string(),
},
resolve: async (root, args) => {
const giraffe = {
name: args.name,
};
await db.giraffes.create(giraffe);
return giraffe;
},
}),
);
```
# Subscriptions
Subscriptions work just like queries and mutations: you can use the `builder.subscriptionType()`,
`builder.subscriptionField()`, and `builder.subscriptionFields()` methods to add subscriptions to
your schema.
```typescript
builder.mutationType({
fields: (t) => ({
incrementCount: t.int({
resolve: (_parent, _args, ctx) => {
ctx.count.value += 1;
ctx.pubSub.publish('COUNT_INCREMENT', ctx.count.value);
return ctx.count.value;
},
}),
}),
});
builder.subscriptionType({
fields: (t) => ({
incrementedCount: t.int({
subscribe: (_parent, _args, ctx) => ctx.pubSub.subscribe('COUNT_INCREMENT'),
resolve: (count) => count,
}),
}),
});
```
Ensure that the `subscribe` function is always defined before the `resolve` function,
otherwise you may run into issues with the resolver arguments not being typed correctly.
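The `ctx.pubSub` used in the subscription example is assumed to be provided through your context
(graphql-yoga's `createPubSub` is one common source). As a rough sketch of the publish/subscribe
contract such an object provides (this is an in-memory illustration, not graphql-yoga's actual API,
which returns async iterables from `subscribe`):

```typescript
// Minimal in-memory pub/sub sketch, illustration only
type Handler<T> = (value: T) => void;

class TinyPubSub<T> {
  private handlers = new Set<Handler<T>>();

  // Deliver a value to every current subscriber
  publish(value: T): void {
    for (const handler of this.handlers) {
      handler(value);
    }
  }

  // Register a handler; returns a function that unsubscribes it
  subscribe(handler: Handler<T>): () => void {
    this.handlers.add(handler);
    return () => {
      this.handlers.delete(handler);
    };
  }
}
```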
# Scalars
URL: /docs/guide/scalars
Guide for defining Scalar types in Pothos
***
title: Scalars
description: Guide for defining Scalar types in Pothos
------------------------------------------------------
## Adding Custom GraphQL Scalars
To add a custom scalar that has been implemented as a GraphQLScalar from
[graphql-js](https://github.com/graphql/graphql-js), you need to provide some type information in
the SchemaTypes generic parameter of the builder:
```typescript
const builder = new SchemaBuilder<{
Scalars: {
Date: {
Input: Date;
Output: Date;
};
};
}>({});
builder.addScalarType('Date', CustomDateScalar);
```
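`CustomDateScalar` above stands in for a `GraphQLScalarType` instance you have defined. At its core
is a serialize/parseValue pair; the following sketch shows what those conversions might look like
for a Date scalar (the ISO string format here is an assumption, not a requirement):

```typescript
// Hypothetical conversion functions for a Date scalar:
// serialize turns the internal Date into a JSON-safe value for responses,
// parseValue turns client-provided input back into a Date.
const serializeDate = (value: Date): string => value.toISOString();

const parseDateValue = (value: string): Date => {
  const date = new Date(value);
  if (Number.isNaN(date.getTime())) {
    throw new Error(`Invalid date: ${value}`);
  }
  return date;
};
```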
The Input type is the type that will be used when the type is used in an argument or `InputObject`.
The Output type is used to validate that resolvers return the correct value when using the scalar
in their return type.
For many scalars `Input` and `Output` will be the same, but they do not always need to match. The
Scalars generic can be used to change types for the built-in scalars.
For example, the defaults for the ID scalar might not be exactly what you want; you can customize
the values like so:
```typescript
const builder = new SchemaBuilder<{
Scalars: {
ID: {
// type all ID arguments and input values as string
Input: string;
// Allow resolvers for ID fields to return strings, numbers, or bigints
Output: string | number | bigint;
};
};
}>({});
```
## Adding Scalars from `graphql-scalars`
Similarly to adding your own custom scalars, you can utilize scalars from the
[graphql-scalars](https://the-guild.dev/graphql/scalars/docs) library by also providing the types
through the SchemaTypes generic parameter.
Note that when implementing the graphql-scalars library, the best types to use for `Input` and
`Output` types are *not* always intuitive. For example, you might assume that the `JSON` type from
graphql-scalars would utilize the global `JSON` type, or another JSON type imported from a library
that tries to enumerate potential JSON values, but it is usually better to just use `unknown`. A
good place to start if you are unsure what type to use is to check the `codegenScalarType` inside
the file where the scalar is defined by `graphql-scalars`
([BigInt scalar definition, for reference](https://github.com/Urigo/graphql-scalars/blob/6bdccebb27a7f9be7b5d01dfb052a3e9c17432fc/src/scalars/BigInt.ts#L92)).
This isn't defined for all scalars, and some scalars use `any` in which case `unknown` might be a
better option.
```typescript
import { DateResolver, JSONResolver } from 'graphql-scalars';
const builder = new SchemaBuilder<{
Scalars: {
JSON: {
Input: unknown;
Output: unknown;
};
Date: {
Input: Date;
Output: Date;
};
};
}>({});
builder.addScalarType('JSON', JSONResolver);
builder.addScalarType('Date', DateResolver);
```
## Defining your own scalars
```typescript
const builder = new SchemaBuilder<{
Scalars: {
PositiveInt: {
Input: number;
Output: number;
};
};
}>({});
builder.scalarType('PositiveInt', {
serialize: (n) => n,
parseValue: (n) => {
if (n >= 0) {
return n;
}
throw new Error('Value must be positive');
},
});
```
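The validation in `parseValue` above can be exercised directly. Extracted as a plain function
(purely for illustration), it shows how invalid input is rejected before a resolver ever runs:

```typescript
// Same check as the PositiveInt parseValue above (note: like the scalar
// above, this accepts 0 as well as positive numbers).
function parsePositiveInt(n: number): number {
  if (n >= 0) {
    return n;
  }
  throw new Error('Value must be positive');
}
```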
## Using scalars
```typescript
builder.queryFields((t) => ({
date: t.field({
type: 'Date',
resolve: () => new Date(),
}),
positive: t.field({
type: 'PositiveInt',
resolve: () => 5,
}),
}));
```
# SchemaBuilder
URL: /docs/guide/schema-builder
Guide for creating and using a Pothos SchemaBuilder
***
title: SchemaBuilder
description: Guide for creating and using a Pothos SchemaBuilder
----------------------------------------------------------------
The schema builder is the core of Pothos. It is used to create types, and then stitch those types
into a GraphQL schema.
## Creating a Schema Builder
The SchemaBuilder takes a generic type parameter that extends a Partial `SchemaTypes`.
```typescript
import SchemaBuilder from '@pothos/core';
const builder = new SchemaBuilder<{
  // Type of the context object
  Context: {};
}>({
  // plugins may add options that can be provided here
});
```
The types provided here are used to enforce the types in resolvers, both for resolver arguments and
return values, but not all types need to be added to this SchemaTypes object. As described in the
[Object guide](./objects) there are a number of different ways to provide type information for a
Pothos type.
## Backing models
Pothos is built around a concept of "backing models". This may be a little confusing at first, but
once you get your head around it, it can be very powerful. When you implement a GraphQL schema, you
really have 2 schemas. The obvious schema is your GraphQL schema and it is made up of the types you
define with the schema builder. The second schema is the schema that describes your internal data,
and the contracts between your resolvers. The types that describe your data in your application will
be different from the types described in your GraphQL schema for a number of reasons. The primitive
types
in typescript and GraphQL do not map cleanly to each other, so there will always be some translation
between the types you have in your application, and the types that are defined in your GraphQL
schema. This is part of the issue, but is not the full story. When mapping a model or object in your
application to a type in your API some fields may match up directly, some fields may need to be
loaded or transformed dynamically when requested, and others you may not want to expose at all.
These differences are why Pothos maintains a mapping of "backing models" (typescript types) to
GraphQL types.
To put it simply, backing models are the types that describe the data as it flows through your
application, which may be substantially different than the types described in your GraphQL schema.
Each object and interface type in your schema has a backing model that is tied to a typescript type
that describes your data. These types are how Pothos types the parent argument and return type of
your resolver functions (among other things).
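The idea can be sketched in plain TypeScript (the `GiraffeModel` type and `resolveAge` function below are hypothetical illustrations, not Pothos APIs): the backing model carries internal data like a `birthday`, while the GraphQL type might expose a derived `age: Int` field computed in a resolver that receives the backing model as its parent.

```typescript
// Hypothetical backing model: the shape of the data inside the application.
interface GiraffeModel {
  firstName: string;
  birthday: Date; // internal representation, not exposed directly in the API
}

// The GraphQL type could expose `age: Int` instead, derived in a resolver.
// The parent argument is typed by the backing model.
function resolveAge(parent: GiraffeModel, asOf: Date): number {
  const msPerYear = 1000 * 60 * 60 * 24 * 365.25;
  return Math.floor((asOf.getTime() - parent.birthday.getTime()) / msPerYear);
}

const giraffe: GiraffeModel = { firstName: 'Gina', birthday: new Date('2015-06-01') };
const age = resolveAge(giraffe, new Date('2020-06-01'));
```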
Now that we covered what backing models are, let's go over where they come from. There are currently
3 ways that Pothos gets a backing model for an object or interface:
1. Classes: If you use classes when defining your object types, Pothos can infer that any field that
resolves to that type should resolve to an instance of that class
2. TypeRefs: Every time you create a type with the schema builder, it returns a `TypeRef` object,
which contains the backing model type for that type. Object Refs can also be created explicitly
with the `builder.objectRef` method.
3. SchemaTypes: The `SchemaTypes` type that is passed into the generic parameter of the
`SchemaBuilder` can also be used to provide backing models for various types. When you reference
a type in your schema by name (as a string), Pothos checks the SchemaTypes to see if there is a
backing model defined for that type.
# Troubleshooting
URL: /docs/guide/troubleshooting
Guide for troubleshooting common Pothos issues
***
title: Troubleshooting
description: Guide for troubleshooting common Pothos issues
-----------------------------------------------------------
Common problems and troubleshooting steps.
This document is currently very incomplete. If you run into issues and find a useful solution,
please feel free to add any tips here.
## Type issues
1. Ensure that typescript is using `strict` mode
2. Move your builder types to a separate named interface
* This will make many type errors significantly more readable
```typescript
interface PothosTypes {
  Context: {
    user: {
      id: string;
    };
  };
}

const builder = new SchemaBuilder<PothosTypes>({...});
```
## Slow vscode or typescript performance
1. Ensure you are not including any very complex objects in your `Context` type. See
[https://github.com/microsoft/TypeScript/issues/45405](https://github.com/microsoft/TypeScript/issues/45405)
## Runtime issues
### Plugin methods are not defined
Ensure that there is only 1 version of each Pothos package, and that they are all in the same root
node\_modules directory. Pothos plugins import classes from `@pothos/core` to add plugin specific
methods to the class prototypes.
### Received multiple implementations for plugin
By default, Pothos doesn't allow multiple plugin registrations with the same name. During
development, it can be helpful to disable this check by setting
`SchemaBuilder.allowPluginReRegistration = true`.
Keep in mind that while this allows plugins to be updated (for example, when using HMR), it can
also lead to unexpected behavior when different plugins use the same name.
### Refs are undefined
If you are running into issues with refs being undefined, it may be due to a circular import. Most
circular imports will work correctly with Pothos as long as the following conditions are true:
1. The `builder` is defined in a file that does not import any files that use the builder (or
indirectly import it).
2. `builder.toSchema()` is called in a file that is not imported by any files that use the builder.
This is generally done by having a simple `builder.ts` that initializes the builder to export. This
file can also define some core parts of your schema (the query object, scalars, etc.). The rest of the
schema can then import the builder from `builder.ts`. A `schema.ts` file can then import all files
that define parts of the schema. `schema.ts` can then call `builder.toSchema()` and export the
result for use by the server.
# Unions
URL: /docs/guide/unions
Guide for defining Union types in Pothos
***
title: Unions
description: Guide for defining Union types in Pothos
-----------------------------------------------------
Union types are defined with a list of object types:
```typescript
const builder = new SchemaBuilder<{
  Objects: {
    GiraffeStringFact: { factKind: 'string'; fact: string };
    GiraffeNumericFact: { factKind: 'number'; fact: string; value: number };
  };
}>({});

builder.objectType('GiraffeStringFact', {
  fields: (t) => ({
    fact: t.exposeString('fact', {}),
  }),
});

const GiraffeNumericFact = builder.objectType('GiraffeNumericFact', {
  fields: (t) => ({
    fact: t.exposeString('fact', {}),
    value: t.exposeFloat('value', {}),
  }),
});

const GiraffeFact = builder.unionType('GiraffeFact', {
  types: ['GiraffeStringFact', GiraffeNumericFact],
  resolveType: (fact) => {
    switch (fact.factKind) {
      case 'number':
        return GiraffeNumericFact;
      case 'string':
        return 'GiraffeStringFact';
    }
  },
});
```
The `types` array can contain Object type names defined in SchemaTypes, Object `Ref`s created by
`builder.objectType`, `builder.objectRef`, or other builder methods, or classes that were used to
implement object types.
The `resolveType` function will be called with each item returned by a field that returns the
unionType, and is used to determine which concrete type the value corresponds to. It is usually good
to have a shared property you can use to differentiate between your union members.
## Using Union Types
```typescript
builder.queryField('giraffeFacts', (t) =>
  t.field({
    type: [GiraffeFact],
    resolve: () => {
      const fact1 = {
        factKind: 'string',
        fact: 'A giraffe’s spots are much like human fingerprints. No two individual giraffes have exactly the same pattern',
      } as const;

      const fact2 = {
        factKind: 'number',
        fact: 'Top speed (MPH)',
        value: 35,
      } as const;

      return [fact1, fact2];
    },
  }),
);
```
# Using plugins
URL: /docs/guide/using-plugins
Guide for using plugins with Pothos
***
title: Using plugins
description: Guide for using plugins with Pothos
------------------------------------------------
Using plugins with Pothos is fairly easy, but works a little differently than other plugin systems
you may be familiar with. One of the most important things to note is that importing plugins may
have some side effects on the Schema builder, and it is recommended to only import the plugins you
are actually using.
The reason for this is that Pothos's plugin system was designed to allow plugins to contribute
features in a way that feels like they are built into the core API, and allow the plugins to take
full advantage of the type system. This means that plugins can extend the core types in Pothos with
their own properties, which happens as soon as the plugin is imported.
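The mechanism behind this is TypeScript declaration merging: two interface declarations with the same name in the same scope are merged into one, so simply importing a plugin's type declarations changes the shape of existing interfaces. A minimal sketch (the `BuilderOptions` interface here is hypothetical, standing in for Pothos's real option interfaces):

```typescript
// The core library declares an options interface...
interface BuilderOptions {
  coreOption?: boolean;
}

// ...and a plugin's type declarations merge additional properties into it.
// Importing the plugin is what brings this declaration into scope.
interface BuilderOptions {
  pluginOption?: string;
}

// Both properties are now valid on the single merged interface.
const options: BuilderOptions = { coreOption: true, pluginOption: 'enabled' };
```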
## Setup
Each plugin should have setup instructions, but should work in a similar way.
First install the plugin:
```package-install
npm install --save @pothos/plugin-scope-auth
```
Next import the plugin's default export (which should just be the name of the plugin), and pass it
when you create your schema builder.
```typescript
import SchemaBuilder from '@pothos/core';
import ScopeAuthPlugin from '@pothos/plugin-scope-auth';

const builder = new SchemaBuilder({
  plugins: [ScopeAuthPlugin],
});
```
Some plugins may allow you to use your own types for one of their features. This is done by passing
types in through the Generic SchemaTypes used by the Schema builder:
```typescript
import SchemaBuilder from '@pothos/core';
import ScopeAuthPlugin from '@pothos/plugin-scope-auth';

const builder = new SchemaBuilder<{
  AuthScopes: {
    example: string;
  };
}>({
  plugins: [ScopeAuthPlugin],
});
```
These types can then be used in other parts of the API (eg. defining the scopes on a field), but
the details of how these types are used will be specific to each plugin, and should be covered in
the documentation for the plugin.
## Ordering
In some cases, it may be important to understand the order in which plugins are applied. All plugin
lifecycle hooks are applied in REVERSE order. This is done to ensure that the most important
(first) plugins are applied after all other effects have been applied. For plugins that wrap
resolvers, because the first plugins are applied last, they become the outermost layer of wrapping
and are executed first. This means it is important to list plugins like `scope-auth` before other,
less critical plugins in your SchemaBuilder.
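The wrapping order can be sketched in plain TypeScript (the `Resolver` and `wrap` types below are hypothetical illustrations, not Pothos internals): applying hooks in reverse makes the first-listed plugin the outermost wrapper, so it runs first.

```typescript
type Resolver = () => string[];

// A wrapper that records the plugin name before delegating to the inner resolver.
const wrap = (next: Resolver, name: string): Resolver => () => [name, ...next()];

const baseResolver: Resolver = () => ['resolve'];
const plugins = ['scope-auth', 'tracing']; // the most critical plugin is listed first

// Hooks are applied in REVERSE order, so the first-listed plugin wraps last
// and ends up as the outermost layer.
const wrapped = [...plugins]
  .reverse()
  .reduce((next, name) => wrap(next, name), baseResolver);
```

Calling `wrapped()` shows `scope-auth` executing before `tracing`, which executes before the base resolver.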
# Writing plugins
URL: /docs/guide/writing-plugins
Guide for writing plugins for Pothos
***
title: Writing plugins
description: Guide for writing plugins for Pothos
----------------------------------------------------
Writing plugins for Pothos may seem a little intimidating at first, because the types used by Pothos
are fairly complex. Fortunately, for many types of plugins, the process is actually pretty easy,
once you understand the core concepts of how Pothos's type system works. Don't worry if the
descriptions don't make complete sense at first. Going through the examples in this guide will
hopefully make things seem a lot easier. This guide aims to cover a lot of the most common use cases
for creating plugins, but does not contain full API documentation. Exploring the types or source
code to see what all is available is highly encouraged, but should not be required for most use
cases.
## The type system
Pothos has 2 main pieces to its type system:
1. `PothosSchemaTypes`: A global namespace for shared types
2. `SchemaTypes`: A collection of types passed around through Generics specific to each instance of
`SchemaBuilder`
### `PothosSchemaTypes`
The `PothosSchemaTypes` namespace contains interfaces for all the various options objects used throughout the
API, along with some other types that plugins may want to extend. Each of the interfaces can be
extended by a plugin to add new options. Each interface takes a number of relevant generic
parameters that can be used to make the options more useful. For example, the interface for field
options will be passed the shape of the parent, the expected return type, and any arguments.
### `SchemaTypes`
The `SchemaTypes` type is based on the Generic argument passed to the `SchemaBuilder`, and extended
with reasonable defaults. Almost every interface in the `PothosSchemaTypes` will have access to it
(look for `Types extends SchemaTypes` in the generics of almost any interface). This Type contains
the types for Scalars, backing models for some object and interface types, and many custom
properties from various plugins. If your plugin needs the user to provide some types that will be
shared across the whole schema, this is how you will be able to access them when adding fields to
the options objects defined in `PothosSchemaTypes`.
## Getting Started
The best place to start is by looking through the
[example plugin](https://github.com/hayes/pothos/tree/main/packages/plugin-example).
The general structure of a plugin has 3 main parts:
1. `index.ts` which contains a plugin's actual implementation
2. `global-types.ts` which contains any additions to Pothos's built-in types.
3. `types.ts` which should contain any types that do NOT belong to the global `PothosSchemaTypes`
namespace.
To get set up quickly, you can copy these files from the example plugin to suit your needs. The
first few things to change are:
1. The plugin name in `index.ts`
2. The name of the Plugin class in `index.ts`
3. The name key/name for the plugin in the `Plugins` interface in `global-types.ts`
After setting up the basic layout of your plugin, I recommend starting by defining the types for
your plugin first (in `global-types.ts`) and setting up a test schema that uses your plugin. This
allows you to get the user facing API for your plugin working first, so you can see that any new
options you add to the API are working as expected, and that any type constraints are enforced
correctly. Once you are happy with your API, you can start building out the functionality in
index.ts. Building the types first also makes the implementation easier because the properties you
will need to access in your extension may not exist on the config objects until you have defined
your types.
### `global-types.ts`
`global-types.ts` must contain the following:
1. A declaration of the `PothosSchemaTypes` namespace
```typescript
declare global {
  export namespace PothosSchemaTypes {}
}
```
2. An addition to the `Plugins` interface that maps the plugin name to the plugin type (this needs
to be inside the `PothosSchemaTypes` namespace)
```typescript
export interface Plugins<Types extends SchemaTypes> {
  example: PothosExamplePlugin<Types>;
}
```
`global-types.ts` should NOT include definitions that do not belong to the `PothosSchemaTypes`
namespace. Types for your plugin should be added to a separate `types.ts` file, and imported as
needed into `global-types.ts`.
To add properties to the various config objects used by the `SchemaBuilder`, you should start by
finding the interface that defines that config object in `@pothos/core`. Currently there are 4 main
files that define the types that make up the `PothosSchemaTypes` namespace.
1. [`type-options.ts`](https://github.com/hayes/pothos/blob/main/packages/core/src/types/global/type-options.ts):
Contains the interfaces that define the options objects for the various types (Object,
Interface, Enum, etc).
2. [`field-options.ts`](https://github.com/hayes/pothos/blob/main/packages/core/src/types/global/field-options.ts):
Contains the interfaces that define the options objects for creating fields
3. [`schema-types.ts`](https://github.com/hayes/pothos/blob/main/packages/core/src/types/global/schema-types.ts):
Contains the interfaces for SchemaBuilder options, SchemaTypes, options for `toSchema`, and other
utility interfaces that may be useful for plugins to extend that do not fall into one of the
other categories.
4. [`classes.ts`](https://github.com/hayes/pothos/blob/main/packages/core/src/types/global/classes.ts):
Contains interfaces that describe the classes used by Pothos, including `SchemaBuilder` and the
various field builder classes.
Once you have identified a type you wish to extend, copy it into the `PothosSchemaTypes` namespace
in your `global-types.ts`, but remove all the existing properties. You will need to keep all the
Generics used by the interface, and should import the types used in generics from `@pothos/core`.
You can now add any new properties to the interface that your plugin needs. Making new properties
optional (`newProp?: TypeOfProp`) is recommended for most use cases.
## `index.ts`
`index.ts` must contain the following:
1. A bare import of the global types (`import './global-types';`)
2. The plugin's name, which should be typed as a string literal rather than as a generic string:
`const pluginName = 'example'`
3. A default export of the plugin name `export default pluginName`
4. A class that extends BasePlugin:
`export class PothosExamplePlugin<Types extends SchemaTypes> extends BasePlugin<Types> {}`
`BasePlugin` and `SchemaTypes` can both be imported from `@pothos/core`
5. A call to register the plugin: `SchemaBuilder.registerPlugin(pluginName, PothosExamplePlugin);`
`SchemaBuilder` can also be imported from `@pothos/core`
### Life cycle hooks
The `SchemaBuilder` will instantiate plugins each time the `toSchema` method is called on the
builder. As the schema is built, it will invoke the various life cycle methods on each plugin if
they have been defined.
To hook into each lifecycle event, simply define the corresponding function in your plugin class.
For the exact function signature, see the `index.ts` of the example plugin.
* `onTypeConfig`: Invoked for each type, with the config object that will be used to construct the
underlying GraphQL type.
* `onOutputFieldConfig`: Invoked for each Object, or Interface field, with the config object
describing the field.
* `onInputFieldConfig`: Invoked for each InputObject field, or field argument, with the config
object describing the field.
* `onEnumValueConfig`: Invoked for each value in an enum
* `beforeBuild`: Invoked before building schemas, last chance to add new types or fields.
* `afterBuild`: Invoked with the fully built Schema.
* `wrapResolve`: Invoked when creating the resolver for each field
* `wrapSubscribe`: Invoked for each field in the `Subscriptions` object.
* `wrapResolveType`: Invoked for each Union and Interface.
Each of the lifecycle methods above (except `beforeBuild`) expect a return value that matches
their first argument (either a config object, or the resolve/subscribe/resolveType function). If
your plugin does not need to modify these values, it can simply return the value that was passed in.
When your plugin does need to change one of the config values, you should return a copy of the
config object with your modifications, rather than modifying the config object that was passed in.
This can be done by either using `Object.assign`, or spreading the original config into a new object
`{...originalConfig, newProp: newValue }`.
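For example, a hook that tags each type config (using a hypothetical `extensions` property shape for illustration) might return a modified copy like this:

```typescript
interface TypeConfig {
  name: string;
  extensions?: Record<string, unknown>;
}

// Return a modified copy instead of mutating the config that was passed in.
function onTypeConfig(typeConfig: TypeConfig): TypeConfig {
  return {
    ...typeConfig,
    extensions: { ...typeConfig.extensions, visitedByExamplePlugin: true },
  };
}

const original: TypeConfig = { name: 'Query' };
const updated = onTypeConfig(original);
```

The original config is left untouched, so other plugins see the values they expect.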
Each config object will have the properties expected by GraphQL for creating the types or fields
(although some properties like `resolve` will be added later), but will also include a number of
Pothos specific properties. These properties include `graphqlKind`, which indicates what kind of
GraphQL type the config object is for, and `pothosOptions`, which contains all the options passed in
to the schema builder when creating the type or field.
If your plugin needs to add additional types or fields to the schema it should do this in the
`beforeBuild` hook. Any types added to the schema after this, may not be included correctly. Plugins
should also account for the fact that a new instance of the plugin will be created each time the
schema is called, so any types or fields added the the schema should only be applied once (per
schema), even if multiple instances of the plugin are created. The help with this, there is a
`runUnique` helper on the base plugin class, which accepts a key, and a callback, and will only run
a callback once per schema for the given key.
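A minimal sketch of such a run-once-per-key helper (a hypothetical standalone implementation for illustration; Pothos's actual `runUnique` is scoped per schema build):

```typescript
const completedKeys = new Set<string>();
let typesAdded = 0;

// Runs the callback only the first time a given key is seen.
function runUnique(key: string, callback: () => void): void {
  if (!completedKeys.has(key)) {
    completedKeys.add(key);
    callback();
  }
}

// Even if two plugin instances try to add the same types, the work runs once.
runUnique('example-plugin:add-types', () => { typesAdded += 1; });
runUnique('example-plugin:add-types', () => { typesAdded += 1; });
```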
## Use cases
Below are a few of the most common use cases for how a plugin might extend Pothos, with very
simplified examples. Most plugins will likely need a combination of these strategies, and some use
cases may not be well documented. If you are unsure about how to solve a specific problem, feel free
to open a GitHub Issue for more help.
to open a GitHub Issue for more help.
In the examples below, when "extending an interface", the interface should be added to the
`PothosSchemaTypes` namespace in `global-types.ts`.
### Adding options to the SchemaBuilder constructor
You may have noticed that plugins are not instantiated by the user, and therefore users can't pass
options directly into your plugin when creating it. Instead, the recommended way to configure your
plugin is by contributing new properties to the options object passed to the SchemaBuilder
constructor. This can be done by extending the `SchemaBuilderOptions` interface.
```typescript
export interface SchemaBuilderOptions<Types extends SchemaTypes> {
  optionInRootOfConfig?: boolean;
  nestedOptionsObject?: ExamplePluginOptions; // imported from types.ts
}
```
Extending this interface will allow the user to pass in these new options when creating an instance
of `SchemaBuilder`.
You can then access the options through `this.builder.options` in your plugin, with everything
correctly typed:
```typescript
export class PothosExamplePlugin<Types extends SchemaTypes> extends BasePlugin<Types> {
  onTypeConfig(typeConfig: PothosTypeConfig) {
    console.log(this.builder.options.optionInRootOfConfig);
    return typeConfig;
  }
}
```
### Adding options when building a schema (`toSchema`)
In some cases, your plugin may be designed for schemas that can be built in different modes. For
example, the mocks plugin allows the schema to be built repeatedly with different sets of mocks, and
the sub-graph plugin allows building a schema multiple times to generate separate subgraphs. For these cases,
you can extend the options passed to `toSchema` instead:
```typescript
export interface BuildSchemaOptions<Types extends SchemaTypes> {
  customBuildTimeOptions?: boolean;
}
```
These options can be accessed through `this.options` in your plugin:
```typescript
export class PothosExamplePlugin<Types extends SchemaTypes> extends BasePlugin<Types> {
  onTypeConfig(typeConfig: PothosTypeConfig) {
    console.log(this.options.customBuildTimeOptions);
    return typeConfig;
  }
}
```
### Adding options to types
Each GraphQL type has its own options interface which can be extended. For example, to extend the
options for creating an Object type:
```typescript
export interface ObjectTypeOptions<Types extends SchemaTypes, Shape> {
  optionOnObject?: boolean;
}
```
These options can then be accessed in your plugin when you receive the config for the type:
```typescript
export class PothosExamplePlugin<Types extends SchemaTypes> extends BasePlugin<Types> {
  onTypeConfig(typeConfig: PothosTypeConfig) {
    if (typeConfig.kind === 'Object') {
      console.log(typeConfig.pothosOptions.optionOnObject);
    }
    return typeConfig;
  }
}
```
In the example above, we need to check `typeConfig.kind` to ensure that the type config is for an
object. Without this check, typescript will not know that the config object is for an object, and
will not let us access the property. `typeConfig.kind` corresponds to how Pothos splits up Types for
its config objects, meaning that it has separate `kind`s for `Query`, `Mutation`, and `Subscription`
even though these are all `Objects` in GraphQL terminology. The `typeConfig.graphqlKind` can be used
to get the actual GraphQL type instead.
### Adding options to fields
Similar to Types, fields also have a number of interfaces that can be extended to add options to
various types of fields:
```typescript
export interface MutationFieldOptions<
  Types extends SchemaTypes,
  Type extends TypeParam<Types>,
  Nullable extends FieldNullability<Type>,
  Args extends InputFieldMap,
  ResolveReturnShape,
> {
  customMutationFieldOption?: boolean;
}
```
Field interfaces have a few more generics than other interfaces we have looked at. These generics
can be used to make the options you add more specific to the field currently being defined. It is
important to copy all the generics of the interfaces as they are defined in `@pothos/core` even if
you do not use the generics in your own properties. If the generics do not match, typescript won't
be able to merge the definitions. You do NOT need to include the `extends` clause of the interface
if the interface extends another interface (like `FieldOptions`).
Similar to Type options, Field options will be available in the fieldConfigs in your plugin, once
you check that the fieldConfig is for the correct `kind` of field.
```typescript
export class PothosExamplePlugin<Types extends SchemaTypes> extends BasePlugin<Types> {
  onOutputFieldConfig(fieldConfig: PothosOutputFieldConfig<Types>) {
    if (fieldConfig.kind === 'Mutation') {
      console.log(fieldConfig.pothosOptions.customMutationFieldOption);
    }
    return fieldConfig;
  }
}
```
### Adding new methods on builder classes
Adding new methods to `SchemaBuilder` or one of the `FieldBuilder` classes is also done through
extending interfaces. Extending these interfaces is how typescript is able to know these methods
exist, even though they are not defined on the original classes.
```typescript
export interface SchemaBuilder {
  buildCustomObject: () => ObjectRef<{ custom: 'shape' }>;
}
```
The above is a simple example of defining a new `buildCustomObject` method that takes no arguments,
and returns a reference to a new custom object type. Defining this type will not work on its own,
and we still need to define the actual implementation of this method. This might look like:
```typescript
const schemaBuilderProto = SchemaBuilder.prototype as PothosSchemaTypes.SchemaBuilder;

schemaBuilderProto.buildCustomObject = function buildCustomObject() {
  return this.objectRef<{ custom: 'shape' }>('CustomObject').implement({
    fields: () => ({}),
  });
};
```
Note that the above function does NOT use an arrow function, so that the function can access `this`
as a reference to the SchemaBuilder instance.
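The same pattern works for any class: merge an interface declaration so typescript knows the method exists, then assign the implementation to the prototype. A self-contained sketch with a hypothetical `Builder` class (not Pothos's actual `SchemaBuilder`):

```typescript
class Builder {
  typeNames: string[] = [];
}

// Declaration merging: tells typescript that the method exists on Builder.
interface Builder {
  addCustom(name: string): Builder;
}

// A regular function (not an arrow function) so `this` is the Builder instance.
Builder.prototype.addCustom = function addCustom(this: Builder, name: string) {
  this.typeNames.push(name);
  return this;
};

const builder = new Builder().addCustom('CustomObject');
```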
### Wrapping resolvers to add runtime functionality
Some plugins will need to add runtime behavior. There are a few lifecycle hooks for wrapping
`resolve`, `subscribe`, and `resolveType`. These hooks will receive the function they are wrapping,
along with a config object for the field or type they are associated with, and should return either
the original function, or a wrapper function with the same API.
It is important to remember that resolvers can resolve values in a number of ways (normal values,
promises, or even something as complicated as `Promise<(Promise<T> | T)[]>`). So be careful when
using a wrapper that introspects the return value of a resolve function. Plugins should only wrap
resolvers when absolutely necessary.
```typescript
export class PothosExamplePlugin<Types extends SchemaTypes> extends BasePlugin<Types> {
  wrapResolve(
    resolver: GraphQLFieldResolver<unknown, Types['Context']>,
    fieldConfig: PothosOutputFieldConfig<Types>,
  ): GraphQLFieldResolver<unknown, Types['Context']> {
    return (parent, args, context, info) => {
      console.log(`Resolving ${info.parentType}.${info.fieldName}`);
      return resolver(parent, args, context, info);
    };
  }
}
```
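Because a resolver may return either a plain value or a promise, a wrapper that needs to inspect the result has to handle both cases without forcing synchronous results to become promises. A small sketch of that pattern (the `MaybePromise` type and `mapResult` helper are hypothetical illustrations, not Pothos APIs):

```typescript
type MaybePromise<T> = T | Promise<T>;

function isPromise<T>(value: MaybePromise<T>): value is Promise<T> {
  return value instanceof Promise;
}

// Applies `fn` to a resolver result whether or not it is a promise,
// keeping synchronous results synchronous.
function mapResult<T, R>(value: MaybePromise<T>, fn: (val: T) => R): MaybePromise<R> {
  return isPromise(value) ? value.then(fn) : fn(value);
}

const syncResult = mapResult(2, (n) => n * 2); // stays synchronous
```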
### Transforming a schema
For some plugins, the other provided lifecycle hooks may not be powerful enough to modify the
schema in all the ways a plugin may want, for example removing types from the schema (eg. the
`sub-graph` plugin). In these cases, the `afterBuild` hook can be used. It receives the built
schema, and is expected to return either the schema it was passed, or a completely new schema. This
allows plugins to use 3rd party libraries like `graphql-tools` to arbitrarily transform schemas if
desired.
### Using SchemaTypes
You may have noticed that almost every interface and type in `@pothos/core` takes a generic that
looks like: `Types extends SchemaTypes`. This type is what allows Pothos and its plugins to share
type information across the entire schema, and to incorporate user defined types into that system.
These SchemaTypes are a combination of default types merged with the Types provided in the Generic
parameter of the SchemaBuilder constructor, and includes a wide variety of useful types:
* Types for all the scalars
* Types for backing models used by objects and interfaces when referenced via strings
* The type used for the context and root objects
* Settings for default nullability of fields
* Any user defined types specific to plugins (more info below)
There are many ways these types can be used, but one of the most common is to access the type for
the context object, so that you can correctly type a callback function for your plugin that accepts
the context object.
```typescript
export interface SchemaBuilderOptions<Types extends SchemaTypes> {
  exampleSetupFn?: (context: Types['Context']) => ExamplePluginSetupConfig;
}
```
### Using user defined types
As mentioned above, your plugin can also contribute its own user definable types to the SchemaTypes
interface. You can see examples of this in several of the plugins, including the directives and
`scope-auth` plugins. Adding your own types to SchemaTypes requires extending 2 interfaces: the
`UserSchemaTypes` interface, which describes the types the user will provide, and the
`ExtendDefaultTypes` interface, which is used to set default values if the user does not provide
their own types.
```typescript
export interface UserSchemaTypes {
  NewExampleTypes: Record<string, number>;
}

export interface ExtendDefaultTypes<PartialTypes extends Partial<UserSchemaTypes>> {
  NewExampleTypes: PartialTypes['NewExampleTypes'] & {};
}
```
The user-provided types can then be accessed using `Types['NewExampleTypes']` in any interface or
type that receives `SchemaTypes` as a generic argument.
### Request data
Plugins that wrap resolvers may need to store some data that is unique to the current request. In these
cases your plugin can define a `createRequestData` method, and use the `requestData` method to get
the data for the current request.
```typescript
export class PothosExamplePlugin<Types extends SchemaTypes> extends BasePlugin<Types, { resolveCount: number }> {
  createRequestData(context: Types['Context']): { resolveCount: number } {
    return { resolveCount: 0 };
  }

  wrapResolve(
    resolver: GraphQLFieldResolver<unknown, Types['Context']>,
    fieldConfig: PothosOutputFieldConfig<Types>,
  ): GraphQLFieldResolver<unknown, Types['Context']> {
    return (parent, args, context, info) => {
      const requestData = this.requestData(context);
      requestData.resolveCount += 1;
      console.log(`request has resolved ${requestData.resolveCount} fields`);
      return resolver(parent, args, context, info);
    };
  }
}
```
The shape of requestData can be defined via the second generic parameter of the `BasePlugin` class.
The `requestData` method expects the context object as its only argument, which is used to uniquely
identify the current request.
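Conceptually, this works like a lazily-populated map keyed by the context object. A simplified sketch of that idea (a hypothetical `RequestDataStore` class, not Pothos's implementation):

```typescript
class RequestDataStore<Context extends object, T> {
  private data = new WeakMap<Context, T>();

  constructor(private createRequestData: (context: Context) => T) {}

  // Creates the data on first access for a given context, then reuses it,
  // so every resolver in the same request shares one object.
  requestData(context: Context): T {
    let existing = this.data.get(context);
    if (existing === undefined) {
      existing = this.createRequestData(context);
      this.data.set(context, existing);
    }
    return existing;
  }
}

const store = new RequestDataStore((_context: { requestId: number }) => ({ resolveCount: 0 }));
const context = { requestId: 1 };
store.requestData(context).resolveCount += 1;
store.requestData(context).resolveCount += 1;
```

A `WeakMap` keyed by the context object also lets the per-request data be garbage collected once the request is finished.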
### Wrapping arguments and inputs
The plugin API does not directly have a method for wrapping input fields; instead, the `wrapResolve`
and `wrapSubscribe` methods can be used to modify the `args` object before passing it down to the
original resolver.
Figuring out how to wrap inputs can be a little complex, especially when dealing with recursive
inputs, and optimizing to wrap as little as possible. To help with this, Pothos has a couple of
utility functions that can make this easier:
* `mapInputFields`: Used to select affected input fields and extract some configuration
* `createInputValueMapper`: Creates a mapping function that uses the result of `mapInputFields` to
map inputs in an args object to new values.
The relay plugin uses these methods to decode `globalID` inputs:
```typescript
export class PothosRelayPlugin<Types extends SchemaTypes> extends BasePlugin<Types> {
  // Optionally create a cache for input mappings so that mappings can be reused across multiple fields
  // Be sure to only provide a mapping cache if your argument mappings are not specific to the current field
  private mappingCache = new Map();

  wrapResolve(
    resolver: GraphQLFieldResolver<unknown, Types['Context']>,
    fieldConfig: PothosOutputFieldConfig<Types>,
  ): GraphQLFieldResolver<unknown, Types['Context']> {
    // Given the args for this field, select the fields that are globalIds
    const argMappings = mapInputFields(
      fieldConfig.args,
      this.buildCache,
      (inputField) => {
        if (inputField.extensions?.isRelayGlobalID) {
          return true;
        }

        // returning null means no mapping will be created for this input field
        return null;
      },
      this.mappingCache,
    );

    // If all fields reachable through args return null for their mapping, we don't need to wrap the resolver
    if (!argMappings) {
      return resolver;
    }

    // Calls the mapping function for each value with a mapping if the value is not null or undefined
    const argMapper = createInputValueMapper(argMappings, (globalID, mapping) =>
      internalDecodeGlobalID(this.builder, String(globalID)),
    );

    return (parent, args, context, info) => resolver(parent, argMapper(args), context, info);
  }
}
```
Using these utilities allows moving more logic to build time (figuring out which fields need
mapping) so that the runtime overhead is as small as possible.
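The resulting split can be sketched with plain objects (the `mapArgs` helper below is a hypothetical, much simpler stand-in for Pothos's recursive input mapping): the set of affected argument names is computed once at build time, and only the value mapping runs per request.

```typescript
// Build-time result: which argument names need mapping.
type ArgMappings = Record<string, true>;

// Runtime step: applies `mapValue` only to the selected args, copying the
// args object rather than mutating it.
function mapArgs(
  args: Record<string, unknown>,
  mappings: ArgMappings,
  mapValue: (value: unknown) => unknown,
): Record<string, unknown> {
  const result = { ...args };
  for (const key of Object.keys(mappings)) {
    if (result[key] != null) {
      result[key] = mapValue(result[key]);
    }
  }
  return result;
}

// Only `id` was selected at build time, so `limit` passes through untouched.
const mapped = mapArgs({ id: 'R2lyYWZmZTox', limit: 10 }, { id: true }, (value) =>
  `decoded(${String(value)})`,
);
```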
`createInputValueMapper` is useful for many cases, but for some plugins it may be better to create a
custom mapping function that still uses the result of `mapInputFields`.
`mapInputFields` returns a map whose keys are field/argument names, and whose values are objects
with the following shape:
```typescript
interface InputFieldMapping<Types extends SchemaTypes, T> {
  kind: 'Enum' | 'Scalar' | 'InputObject';
  isList: boolean;
  config: PothosInputFieldConfig<Types>;
  value: T; // the value returned by the mapping function (if it was not null).
  // The value may still be null for `InputObject` mappings if there are nested fields with non-null mappings
}
```
If the `kind` is `InputObject`, the mapping object will also have a `fields` property with an
object of the following shape:
```typescript
interface InputTypeFieldsMapping<Types extends SchemaTypes, T> {
  configs: Record<string, PothosInputFieldConfig<Types>>;
  map: Map<string, InputFieldMapping<Types, T>> | null;
}
```
Both the root level map and the `fields.map` maps will only contain entries for fields where the
mapping function did not return null. If the mapping function returned null for all fields,
`mapInputFields` will return null instead of returning a map, to indicate that no wrapping should occur.
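To illustrate the idea, here is a standalone sketch of how a mapping tree like the one above can be applied to an args object. This is not the real Pothos implementation; the `Mapping` type and `mapArgs` function are simplified, illustrative stand-ins for `InputFieldMapping` and `createInputValueMapper`:

```typescript
// Simplified mapping tree: either a mapped leaf, or an input object whose
// listed fields have their own mappings (unlisted fields are left untouched)
type Mapping =
  | { kind: 'Scalar' }
  | { kind: 'InputObject'; fields: Record<string, Mapping> };

function mapArgs(
  value: unknown,
  mapping: Mapping,
  transform: (leaf: unknown) => unknown,
): unknown {
  if (value == null) {
    return value;
  }

  if (mapping.kind === 'Scalar') {
    // Leaf with a mapping: apply the transform (eg. decode a global ID)
    return Array.isArray(value) ? value.map(transform) : transform(value);
  }

  // InputObject: recurse only into fields that have mappings
  const result: Record<string, unknown> = { ...(value as Record<string, unknown>) };

  for (const [name, fieldMapping] of Object.entries(mapping.fields)) {
    result[name] = mapArgs(result[name], fieldMapping, transform);
  }

  return result;
}

// Example: decode only the `id` field nested inside `filter`
const mapped = mapArgs(
  { filter: { id: 'VXNlcjox', title: 'hello' } },
  {
    kind: 'InputObject',
    fields: { filter: { kind: 'InputObject', fields: { id: { kind: 'Scalar' } } } },
  },
  (id) => Buffer.from(String(id), 'base64').toString('utf8'),
);
// mapped.filter.id is decoded, mapped.filter.title is unchanged
```

Because only fields with mappings appear in the tree, unmapped args are copied through without any per-field work at runtime.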
### Removing fields and enum values
Plugins can remove fields from objects, interfaces, and input objects, and remove specific values
from enums. To do this, simply return null from the corresponding on\*Config plugin hook:
```typescript
onOutputFieldConfig(fieldConfig: PothosOutputFieldConfig<Types>) {
  if (fieldConfig.name === 'removeMe') {
    return null;
  }

  return fieldConfig;
}

onInputFieldConfig(fieldConfig: PothosInputFieldConfig<Types>) {
  if (fieldConfig.name === 'removeMe') {
    return null;
  }

  return fieldConfig;
}

onEnumValueConfig(valueConfig: PothosEnumValueConfig<Types>) {
  if (valueConfig.value === 'removeMe') {
    return null;
  }

  return valueConfig;
}
```
Removing whole types from the schema needs to be done by transforming the schema during the
`afterBuild` hook. See the `sub-graph` plugin for a more complete example of removing types.
## Useful methods:
* `builder.configStore.onTypeConfig`: Takes a type ref and a callback, and will invoke the callback
with the config for the referenced type once available.
* `fieldRef.onFirstUse`: Takes a callback to invoke once the config for the field is available.
* `buildCache.getTypeConfig`: Gets the config for a given type after it has been passed through any
  modifications applied by plugins.
# Giraphql to Pothos
URL: /docs/migrations/giraphql-pothos
Migration guide for upgrading from GiraphQL 2.* to Pothos 3.0
***
title: 'Giraphql to Pothos'
description: Migration guide for upgrading from GiraphQL 2.\* to Pothos 3.0
---------------------------------------------------------------------------
# Migrating from GiraphQL to Pothos
As of 3.0, GiraphQL has been renamed to Pothos. The primary motivation for this rename is to make
this library and its associated projects, guides, and other content more discoverable. GiraphQL is
not visually distinct from GraphQL, and has often been interpreted as a typo. Search engines tend to
auto-correct the name to GraphQL, making it hard to search for.
## Changes for consumers of GiraphQL
* All packages have been moved from the `@giraphql/*` scope to `@pothos/*` scope.
* The `GiraphQLSchemaTypes` global typescript scope has been renamed to `PothosSchemaTypes`
* Exported types prefixed with `GiraphQL` have had that prefix replaced with `Pothos`
For the most part, the easiest way to upgrade is by doing a CASE SENSITIVE search and replace of
`giraphql` -> `pothos` and `GiraphQL` -> `Pothos`. The only non-documentation changes between the
latest version of GiraphQL and the initial version of Pothos (`v3.0.0`) are the renamed types and
packages.
## Plugin specific changes
### Prisma plugin
* The generator/provider for prisma types has been renamed to `prisma-pothos-types`.
You will need to update your prisma schema to use the new provider:
```prisma
generator pothos {
  provider = "prisma-pothos-types"
}
```
## For plugin authors
* Some `extensions` fields in the build schemas have been renamed. Specifically:
* `giraphQLOptions` has been renamed to `pothosOptions`
* `giraphQLConfig` has been renamed to `pothosConfig`
# Migrations
URL: /docs/migrations
List of Pothos migration guides
***
title: Migrations
description: List of Pothos migration guides
--------------------------------------------
* [3.\* to (4.0)](./migrations/v4)
* [GiraphQL (2.\*) to Pothos (3.0)](./migrations/giraphql-pothos)
* [1.\* to 2.0](./migrations/v2)
# Migration to Pothos from other GraphQL libraries
Official migration tools are currently a work in progress, and we are hoping to make incremental
migration from a number of common setups much easier in the near future. For now there are a few
tools that may be helpful while the official tooling for migrations is being developed.
* [Nexus to Pothos codemod](https://github.com/villesau/nexus-to-pothos-codemod)
This 3rd party codemod aims to transform all of the Nexus types, queries, and mutations to Pothos
equivalents. This codemod will still require some manual adjustments to get everything working
correctly, but can be a huge help in the migration process.
* [Pothos Generator](https://github.com/hayes/pothos/tree/main/packages/converter)
This is an undocumented CLI that can convert a schema into valid Pothos code. Resolvers are all
placeholders that throw errors, so this is not quite as useful as it sounds, but can be helpful,
especially for generating input types.
# v2.0
URL: /docs/migrations/v2
Migration guide for upgrading from GiraphQL 1.* to GiraphQL 2.0
***
title: 'v2.0'
description: Migration guide for upgrading from GiraphQL 1.\* to GiraphQL 2.0
-----------------------------------------------------------------------------
The 2.0 release was mostly focused on re-designing the plugin system so it could be properly
documented, and made available for broader adoption. The previous plugin system allowed plugins to
use the FieldWrapper base class to wrap fields. Unfortunately the overhead of this wrapping strategy
was significantly higher than expected, and could not be optimized in a way that justified the
conveniences it provided.
## Breaking changes
### Auth plugin
The auth plugin has been replaced by a new `scope-auth` plugin. Unfortunately due to the performance
problems with the original field wrapping API, the auth plugin had to be re-designed, and
maintaining the existing API at the cost of significant performance overhead did not seem justified.
Any existing usage of the `auth` plugin will need to be replaced with the new `scope-auth` plugin.
The API of the new `scope-auth` plugin is substantially different, and the specifics of the
migration will depend on the exact usage of the original auth plugin. Documentation on the new
plugin can be found [here](../plugins/scope-auth).
### Plugin names
Plugin names have been normalized, and are now exported as the default export of the plugin
packages.
Change:
```typescript
// old
import '@pothos/plugin-simple-objects';

const builder = new SchemaBuilder({
  plugins: ['PothosSimpleObjects'],
});

// new
import SimpleObjectsPlugin from '@pothos/plugin-simple-objects';

const builder = new SchemaBuilder({
  plugins: [SimpleObjectsPlugin],
});
```
### Plugin Order
The old plugin API did not make strong guarantees about the order in which plugin hooks would be
executed. Plugins are now always triggered in reverse order. The most critical plugins (like
`auth-scope`) should appear first in the list of plugins. This ensures that any modifications made
by other plugins are applied first, and lets the more important plugins be at the top of the call
stack when resolving fields.
### InputFieldBuilder.bool and InputFieldBuilder.boolList
The `bool` alias on `InputFieldBuilder` has been removed, as it was inconsistent with the other
field builders and general naming convention of other methods. Usage of this method should be
converted to the canonical `boolean` and `booleanList` methods.
Change:
```typescript
// Old
t.arg.bool({});
t.arg.boolList({});
// New
t.arg.boolean();
t.arg.booleanList();
```
### args on "exposed" fields
Fields defined with the `expose` helpers no longer accept `args` since they also do not have a
resolver.
### Plugin API
The Plugin API has been completely re-designed and is now
[documented here](../guide/writing-plugins). New instances of plugins are now instantiated each time
`toSchema` is called on the `SchemaBuilder`, rather than being tied to the lifetime of the
`SchemaBuilder` itself.
## New features
* Lots of new documentation
* New scope-auth plugin
* New directives plugin
* New plugin API
* Significant performance improvements in smart-subscriptions and scope-auth plugins
# v4.0
URL: /docs/migrations/v4
Migration guide for upgrading from Pothos 3.x to Pothos 4.0
***
title: 'v4.0'
description: Migration guide for upgrading from Pothos 3.x to Pothos 4.0
------------------------------------------------------------------------
## Overview
Migrating from Pothos 3.x to 4.0
The `4.0` release of Pothos is largely focused on updating 4 things:
1. Improving outdated defaults to be more consistent and aligned with best practices
2. Updating naming of some config options to be more consistent
3. Updating minimum versions of peer dependencies
4. Updating internal types to support some previously challenging plugin patterns
While the internals of Pothos have been almost entirely re-written, the public API surface should
have minimal changes for most users. The first 2 sets of changes will cover the majority of the
changes relevant to most applications. To make the upgrade as simple as possible, some options were
added to maintain the defaults and option names from `3.x`, which are described in the simple
upgrade section below.
## New minimum versions
* `typescript`: `5.0.2`
* `graphql`: `16.6.0`
* `node`: `18.0`
## Simple Upgrade (restore 3.0 options and defaults)
You can restore the 3.x defaults by adding the Defaults versions to both the SchemaTypes and the
builder options:
```ts
const builder = new SchemaBuilder<{
  Defaults: 'v3';
}>({
  defaults: 'v3',
});
```
This will restore all the defaults and config options from previous Pothos versions for both core
and plugins.
If you are using `@pothos/plugin-validation`, it has been renamed to `@pothos/plugin-zod`, and a new
validation plugin will be released in the future.
```diff
- import ValidationPlugin from '@pothos/plugin-validation';
+ import ZodPlugin from '@pothos/plugin-zod';
const builder = new SchemaBuilder({
- plugins: [ValidationPlugin],
+ plugins: [ZodPlugin],
});
```
## Manual update
There are a number of new defaults and changes to options for various plugins. To fully upgrade to
4.0 see the full list of breaking changes below:
# Breaking API Changes:
This section covers breaking API changes that can be automatically reverted by using the Simple
Upgrade process described above.
Changes to types and classes outside the main Pothos API are described in the next section. Those
changes will primarily affect other plugins and tools written for Pothos, but may be relevant to
some type helpers you have created.
## `@pothos/core`
### Default field nullability
In previous versions of Pothos, fields were non-nullable by default. This is inconsistent with the
rest of the GraphQL ecosystem, so the default is being changed to make fields nullable by default.
To restore the previous behavior you can set the `defaultFieldNullability` option when creating your
builder:
```ts
export const builder = new SchemaBuilder<{
  DefaultFieldNullability: false;
}>({
  defaultFieldNullability: false,
});
```
Alternatively, fields can be updated to add `nullable: false` to the fields options.
### Default ID Scalar types
The default types for the built in `ID` Scalar have been changed to more closely match the behavior
of JavaScript GraphQL server implementations:
```ts
interface IDType {
  Input: string;
  Output: number | string | bigint;
}
```
This will make working with IDs in arguments and input types easier by avoiding unnecessary type
checks to see if an `ID` is a `number` or `string`.
When returning an `ID` from a resolver, you will be able to return a `string`, `number`, or `bigint`.
To restore the previous defaults you can customize the `ID` scalar types when creating your builder:
```ts
const builder = new SchemaBuilder<{
  Scalars: {
    ID: {
      Input: number | string;
      Output: number | string;
    };
  };
}>({});
```
## `@pothos/plugin-relay`
### Renamed options
The base relay plugin options have moved from `relayOptions` to `relay` to be more consistent with
options for other plugins.
```diff
const builder = new SchemaBuilder<{}>({
- relayOptions: {...}
+ relay: {...}
})
```
### New defaults
A number of the default values for relay options have changed:
* `clientMutationId`: Now defaults to `"omit"` and was previously `"required"`
  * `clientMutationId` was only required in early versions of the relay client, and is no longer
    recommended.
* `cursorType`: Now defaults to `"String"` and was previously `"ID"`
  * The previous defaults were inconsistent about the type of a cursor. Cursors generally should not
    be treated as IDs as they are meant to indicate a position in a list, and may contain
    information specific to other filters or arguments applied to the connection.
* `brandLoadedObjects`: Now defaults to `true` and was previously `false`
  * This change will improve developer experience for most node implementations, as it removes the
    need for `isTypeOf` to be defined for most nodes.
* `edgesFieldOptions.nullable`: Now defaults to
  `{ list: options.defaultFieldNullability, items: true }` and was previously
  `{ list: false, items: true }`
* `nodeFieldOptions.nullable`: Now defaults to `options.defaultFieldNullability` and was previously
  `false`
  * This new default is intended to align with the relay connection spec, which does not expect
    connections to be NonNullable by default
To restore the previous defaults you can pass the old values when setting up the builder:
```ts
const builder = new SchemaBuilder<{
  // To change edgesFieldOptions.nullable you must also update the type here
  DefaultEdgesNullability: { list: false; items: true };
}>({
  relay: {
    clientMutationId: 'required',
    brandLoadedObjects: false,
    edgesFieldOptions: {
      nullable: { list: false, items: true },
    },
    nodeFieldOptions: {
      nullable: false,
    },
    cursorType: 'ID',
    // the cursor fields on edges and pageInfo previously defaulted to `String`
    // but will be overwritten by `cursorType`, so you also need to explicitly set them
    edgeCursorType: 'String',
    pageInfoCursorType: 'String',
    // If you are using the new v4 nullability defaults, you may need to change the nullability of mutation fields
    relayMutationFieldOptions: {
      nullable: false,
    },
  },
});
```
## `@pothos/plugin-prisma`
### Nullable relations
Previously the prisma plugin would allow `t.relation` to define non-nullable fields using nullable
relations. The plugin now requires an `onNull` option to handle null relations on non-nullable
fields.
To restore the previous behavior you can set the `onNull` option to `'error'`, which will result in
a runtime error when the field returns null:
```diff
t.relation('nullableRelation', {
+ onNull: 'error',
})
```
Alternatively you can mark the field as nullable:
```diff
t.relation('nullableRelation', {
+ nullable: true,
})
```
`onNull` can also be set to a function that returns either a record matching the type of the
relation, or a custom Error to throw when the relation is null.
```ts
t.relation('nullableRelation', {
  onNull: () => loadPlaceholder(),
});
```
## `@pothos/plugin-directives`
`useGraphQLToolsUnorderedDirectives` has been nested inside a `directives` options object:
```diff
const builder = new SchemaBuilder<{}>({
- useGraphQLToolsUnorderedDirectives: true
+ directives: {
+ useGraphQLToolsUnorderedDirectives: true
+ }
})
```
## `@pothos/plugin-errors`
### Renamed options
The base error plugin options have moved from `errorOptions` to `errors` to be more consistent with
options for other plugins.
```diff
const builder = new SchemaBuilder<{}>({
- errorOptions: {...}
+ errors: {...}
})
```
## `@pothos/plugin-scope-auth`
### Renamed options
The base scope-auth plugin options have moved from `scopeAuthOptions` to `scopeAuth` to be more
consistent with options for other plugins. The `authScopes` option has been moved to
`scopeAuth.authScopes` to keep all options for the plugin in one options object.
```diff
const builder = new SchemaBuilder<{}>({
- scopeAuthOptions: {...}
- authScopes: (ctx) => ({...})
+ scopeAuth: {
+ ...otherOptions,
+ authScopes: (ctx) => ({...})
+ }
})
```
## `@pothos/plugin-zod` (previously `@pothos/plugin-validation`)
### Renamed options
The base validation plugin options have moved from `validationOptions` to `zod` to match the new
plugin name and be more consistent with options for other plugins.
```diff
const builder = new SchemaBuilder<{}>({
- validationOptions: {...}
+ zod: {...}
})
```
## `@pothos/plugin-authz` has been removed
The `@pothos/plugin-authz` plugin has been removed, because the underlying `@graphql-authz/core` is
not actively maintained, and has left critical security vulnerabilities unaddressed.
# Plugin API and Type changes
Unlike the defaults and config changes, the changes to the types and classes used throughout Pothos
can't easily be made backward compatible with the 3.x releases. Below is a summary of the main
changes made to the types and classes that may be used by plugins, helpers, or other libraries. Many
of these types and classes are primarily intended for internal use, and should not affect most
applications using Pothos, but the changes are documented here to help with upgrades for those of
you building your own plugins, or using these types in your applications.
The 4.0 release is intended to allow Pothos to become more modular and extensible. This requires
Refs and many associated type helpers to propagate the SchemaTypes from the builder that originated
them, meaning most of the changes listed below are adding `Types extends SchemaTypes` as the first
generic argument to the type.
## Classes
* `InputFieldBuilder`
  * Removed the `typename` argument from the constructor
  * Updated field methods to return a new `GenericInputRef`
* `InterfaceFieldBuilder`
  * Removed the `typename` argument from the constructor
* `ObjectFieldBuilder`
  * Removed the `typename` argument from the constructor
* `BaseTypeRef`
  * Added `SchemaTypes` as a new Generic parameter
* `EnumTypeRef`
  * Added `SchemaTypes` as a new Generic parameter
* `InputObjectRef`
  * Added `SchemaTypes` as a new Generic parameter
* `InputRef`
  * Added `SchemaTypes` as a new Generic parameter
* `OutputTypeRef`
  * Added `SchemaTypes` as a new Generic parameter
* `ListRef`
  * Added `SchemaTypes` as a new Generic parameter
* `InterfaceRef`
  * Added `SchemaTypes` as a new Generic parameter
* `ObjectRef`
  * Added `SchemaTypes` as a new Generic parameter
* `ScalarRef`
  * Added `SchemaTypes` as a new Generic parameter
* `UnionRef`
  * Added `SchemaTypes` as a new Generic parameter
* `FieldRef`
  * Added `SchemaTypes` as a new Generic parameter
  * Removed the typename from constructor args
  * Added the builder and field options as arguments for the constructor
* `InputFieldRef`
  * Added `SchemaTypes` as a new Generic parameter
  * Removed the typename and kind from constructor args
  * Added the builder and field options as arguments for the constructor
  * Split argument refs into a new `ArgumentRef` class
## Exported types
* `*FieldThunk`
  * Updated to return a `GenericFieldRef`
* `FieldMap`
  * Updated to `Record<string, GenericFieldRef<unknown>>`
* `InputFieldMap`
  * Updated to `Record<string, GenericInputFieldRef<unknown>>`
* `InputFieldsFromShape`
  * Added `SchemaTypes` as a new Generic parameter
* `InputShapeFromField`
  * Updated to accept a `GenericFieldRef`
## Field options
The global interfaces for FieldOptions no longer include the `resolve` option, which has moved to
the `InferredFieldOptions` interface to allow plugins to replace or change the resolve function
types globally.
This means that when extending the `FieldOptionsByKind` interface, if you previously extended one of
the built in Field option interfaces, you will need to update your types to include the `resolve`
function types as well:
```diff
export interface FieldOptionsByKind<
  Types extends SchemaTypes,
  ParentShape,
  Type extends TypeParam<Types>,
  Nullable extends FieldNullability<Type>,
  Args extends InputFieldMap,
  ResolveShape,
  ResolveReturnShape,
> {
-  CustomObjectObject: CustomOptions &
-    PothosSchemaTypes.ObjectFieldOptions<
-      Types,
-      ParentShape,
-      Type,
-      Nullable,
-      Args,
-      ResolveReturnShape
-    >;
+  CustomObjectObject: CustomOptions &
+    PothosSchemaTypes.ObjectFieldOptions<
+      Types,
+      ParentShape,
+      Type,
+      Nullable,
+      Args,
+      ResolveReturnShape
+    > &
+    InferredFieldOptionsByKind<
+      Types,
+      Types['InferredFieldOptionsKind'],
+      ParentShape,
+      Type,
+      Nullable,
+      Args,
+      ResolveReturnShape
+    >;
}
```
The `InferredFieldOptionsByKind` interface can be used to get the `resolve` option by default, but
will also work for plugins that replace the `resolve` function with different options for
configuring how a field is resolved. Some custom object types may want to explicitly define a
`resolve` option type, or omit it entirely (eg, the SimpleObject plugin does not use resolvers).
# Add GraphQL plugin
URL: /docs/plugins/add-graphql
A plugin for adding existing GraphQL types to Pothos
***
title: Add GraphQL plugin
description: A plugin for adding existing GraphQL types to Pothos
-----------------------------------------------------------------
This plugin makes it easy to integrate GraphQL types from existing schemas into your Pothos API.
It can be used for incremental migrations from Nexus, graphql-tools, or any other JS/TS executable
schema.
## Install
```package-install
npm install --save @pothos/plugin-add-graphql
```
## Setup
```typescript
import AddGraphQLPlugin from '@pothos/plugin-add-graphql';

const builder = new SchemaBuilder({
  plugins: [AddGraphQLPlugin],
});
```
## Usage
There are 2 ways you can reference existing types.
* Adding types (or a whole external schema) when setting up the builder
* Adding types as Refs using new builder methods
### Adding types when creating your builder
Adding types to the builder will automatically include the types in your schema when it's built.
Types will only be added if no existing type of the same name is added to the builder before
building the schema.
Adding a type also recursively adds any other types that it depends on in its fields, interfaces,
or union members.
```ts
import { existingSchema } from './existing-schema-location';

const builder = new SchemaBuilder({
  plugins: [AddGraphQLPlugin],
  add: {
    // You can add individual types
    // This accepts any GraphQLNamedType (Objects, Interfaces, Unions, Enums, Scalars, and InputObjects)
    types: [existingSchema.getType('User'), existingSchema.getType('Post')],
    // Or you can add an entire external schema
    schema: existingSchema,
  },
});
```
Adding types by themselves isn't very useful, so you'll probably want to be able to reference them
when defining fields in your schema. To do this, you can add them to the builder's generic Types.
This currently only works for `Object`, `Interface`, and `Scalar` types. For other types, use the
builder methods below to create refs to the added types.
```ts
import { existingSchema } from './existing-schema-location';

const builder = new SchemaBuilder<{
  Objects: {
    User: UserType;
  };
  Interfaces: {
    ExampleInterface: { id: string };
  };
  Scalars: {
    DateTime: {
      Output: Date;
      Input: Date;
    };
  };
}>({
  plugins: [AddGraphQLPlugin],
  add: {
    types: [
      existingSchema.getType('User'),
      existingSchema.getType('ExampleInterface'),
      existingSchema.getType('DateTime'),
    ],
  },
});

builder.queryFields((t) => ({
  user: t.field({ type: 'User', resolve: () => getUser() }),
  exampleInterface: t.field({ type: 'ExampleInterface', resolve: () => getThings() }),
  now: t.field({ type: 'DateTime', resolve: () => new Date() }),
}));
```
### Adding types using builder methods
#### Objects
```ts
// Passing in a generic type is recommended to ensure type-safety
const UserRef = builder.addGraphQLObject<UserType>(
  existingSchema.getType('User') as GraphQLObjectType,
  {
    // Optionally you can override the type's name
    name: 'AddedUser',
    // You can also pass in any other options you can define for normal object types
    description: 'This type represents Users',
  },
);

const PostRef = builder.addGraphQLObject<{
  id: string;
  title: string;
  content: string;
}>(existingSchema.getType('Post') as GraphQLObjectType, {
  fields: (t) => ({
    // remove the existing title field from the type
    title: null,
    // add a new postTitle field
    postTitle: t.exposeString('title'),
  }),
});
```
You can then use the returned references when defining fields:
```ts
builder.queryFields((t) => ({
  posts: t.field({
    type: [PostRef],
    resolve: () => loadPosts(),
  }),
}));
```
### Interfaces
```ts
const NodeRef = builder.addGraphQLInterface(
  existingSchema.getType('Node') as GraphQLInterfaceType,
  {
    // interface options
  },
);
```
### Unions
```ts
const SearchResult = builder.addGraphQLUnion(
  existingSchema.getType('SearchResult') as GraphQLUnionType,
  {
    // union options
  },
);
```
### Enums
```ts
const OrderBy = builder.addGraphQLEnum<'Asc' | 'Desc'>(
  existingSchema.getType('OrderBy') as GraphQLEnumType,
  {
    // enum options
  },
);
```
### Input objects
```ts
const PostFilter = builder.addGraphQLInput<{ title?: string; tags?: string[] }>(
  existingSchema.getType('PostFilter') as GraphQLInputObjectType,
  {
    // input options
  },
);
```
### Scalars
This plugin does not add a new method for scalars, because Pothos already has a method for adding
existing scalar types.
```ts
builder.addScalarType('DateTime', existingSchema.getType('DateTime') as GraphQLScalarType, {
  // scalar options
});
```
# Complexity plugin
URL: /docs/plugins/complexity
Complexity plugin docs for Pothos
***
title: Complexity plugin
description: Complexity plugin docs for Pothos
----------------------------------------------
This plugin allows you to define complexity of fields and limit the maximum complexity, depth, and
breadth of queries.
## Usage
### Install
```package-install
npm install --save @pothos/plugin-complexity
```
### Setup
```typescript
import ComplexityPlugin from '@pothos/plugin-complexity';

const builder = new SchemaBuilder({
  plugins: [ComplexityPlugin],
});
```
### Configure defaults and limits
To limit query complexity you can specify a maximum complexity either in the builder setup, or when
building the schema:
```typescript
const builder = new SchemaBuilder({
  plugins: [ComplexityPlugin],
  complexity: {
    defaultComplexity: 1,
    defaultListMultiplier: 10,
    limit: {
      complexity: 500,
      depth: 10,
      breadth: 50,
    },
    // or
    limit: (ctx) => ({
      complexity: 500,
      depth: 10,
      breadth: 50,
    }),
  },
});

// or
const schema = builder.toSchema({
  complexity: {
    limit: {
      complexity: 500,
      depth: 10,
      breadth: 50,
    },
  },
});
```
#### Options
* fieldComplexity: (optional,
  `(args, ctx, field) => { complexity: number, multiplier: number } | number`): default complexity
  calculation for fields. `defaultComplexity` and `defaultListMultiplier` will not be used if this
  is set.
* defaultComplexity: (optional `number`) defines the default complexity for every field in the
  schema
* defaultListMultiplier: (optional `number`) defines a default complexity multiplier for a list
  field's sub-selections
* limit: Defines limits for queries, passed the context object if `limit` is a function
  * complexity: defines the maximum complexity allowed for queries
  * depth: defines the maximum depth of selections in a query
  * breadth: defines the maximum total selections in a query
* complexityError: (optional `function`) defines the error to throw when the query complexity
  exceeds the limit. The function is passed the errorKind (depth, breadth, or complexity), the
  result (with the depth, breadth, complexity, and max values), and a GraphQL `info` object. It
  should return (or throw) an error, or an error message as a string
### How complexity is calculated
Complexity is calculated before resolving any root level fields (query, mutation, subscription),
and is based purely on the shape of the query before execution begins.
The complexity of a query is the sum of the complexity of each selected field. If a field has
sub-selections, the complexity of its sub-selections is multiplied by the field's multiplier, and
then added to the field's own complexity. The default multiplier for fields is 1, and 10 for list
fields. This multiplier is meant to account for the n+1 complexity of list fields.
#### Example
The following query has a complexity of `131` (assuming we are using the default options), a depth
of `3`, and a breadth of `5`:
```gql
query {
  posts {
    # complexity = 131 (posts + 10 * (2 + 11))
    author {
      # complexity = 2 (author + 1 * name)
      name # complexity = 1, depth: 3
    }
    comments {
      # complexity = 11 (comments + 10 * comment)
      comment # complexity = 1, depth: 3
    }
  }
}
```
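The arithmetic above can be sketched as a small standalone function. This is illustrative only (the `Selection` shape is a simplification, not the plugin's internal representation):

```typescript
// Each selection contributes its own complexity plus its multiplier
// times the summed complexity of its sub-selections
interface Selection {
  complexity: number; // defaultComplexity is 1
  multiplier: number; // 1 for most fields, 10 for list fields by default
  selections?: Selection[];
}

function complexityOf(selection: Selection): number {
  const children = (selection.selections ?? []).reduce(
    (sum, child) => sum + complexityOf(child),
    0,
  );

  return selection.complexity + selection.multiplier * children;
}

// The example query above: posts (list) -> author { name }, comments (list) -> comment
const posts: Selection = {
  complexity: 1,
  multiplier: 10,
  selections: [
    // author { name } = 1 + 1 * 1 = 2
    { complexity: 1, multiplier: 1, selections: [{ complexity: 1, multiplier: 1 }] },
    // comments { comment } = 1 + 10 * 1 = 11
    { complexity: 1, multiplier: 10, selections: [{ complexity: 1, multiplier: 1 }] },
  ],
};

console.log(complexityOf(posts)); // 1 + 10 * (2 + 11) = 131
```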
### Defining complexity of a field:
You can set a custom complexity value on any field:
```typescript
builder.queryFields((t) => ({
  posts: t.field({
    type: [Post],
    complexity: 20,
  }),
}));
```
The complexity option can also set the multiplier for a field:
```typescript
builder.queryFields((t) => ({
  posts: t.field({
    type: [Post],
    complexity: { field: 5, multiplier: 20 },
  }),
}));
```
A field's complexity can also be based on the field's arguments, or the context value:
```typescript
builder.queryFields((t) => ({
  posts: t.field({
    type: [Post],
    args: {
      limit: t.arg.int(),
    },
    // base multiplier on how many posts are being requested
    complexity: (args, ctx) => ({ field: 5, multiplier: args.limit ?? 5 }),
  }),
}));
```
## Utilities
### `complexityFromQuery(query, options)`
Returns the query complexity for a given GraphQL query.
```typescript
const complexity = complexityFromQuery(query, {
  schema: schema,
  // Complexity can be calculated based on the context and arguments,
  // so you may need to provide valid values for the context and arguments.
  // Both are optional, and will default to empty objects.
  context: {},
  variables: {},
});
```
# Dataloader plugin
URL: /docs/plugins/dataloader
Dataloader plugin docs for Pothos
***
title: Dataloader plugin
description: Dataloader plugin docs for Pothos
----------------------------------------------
This plugin makes it easy to add fields and types that are loaded through a dataloader.
## Usage
### Install
To use the dataloader plugin you will need to install both the `dataloader` package and the Pothos
dataloader plugin:
```package-install
npm install --save dataloader @pothos/plugin-dataloader
```
### Setup
```typescript
import DataloaderPlugin from '@pothos/plugin-dataloader';

const builder = new SchemaBuilder({
  plugins: [DataloaderPlugin],
});
```
### loadable objects
To create an object type that can be loaded with a dataloader use the new `builder.loadableObject`
method:
```typescript
const User = builder.loadableObject('User', {
  // load will be called with ids of users that need to be loaded
  // Note that the types for keys (and context if present) are required
  load: (ids: string[], context: ContextType) => context.loadUsersById(ids),
  fields: (t) => ({
    id: t.exposeID('id', {}),
    username: t.string({
      // the shape of parent will be inferred from `loadUsersById()` above
      resolve: (parent) => parent.username,
    }),
  }),
});
```
It is **VERY IMPORTANT** to return values from `load` in an order that exactly matches the order of
the requested IDs. The order is used to map results to their IDs, and if the results are returned in
a different order, your GraphQL requests will end up with the wrong data. Correctly sorting results
returned from a database or other data source can be tricky, so this plugin has a `sort` option
(described below) to simplify the sorting process. For more details on how the load function works,
see the [dataloader docs](https://github.com/graphql/dataloader#batch-function).
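To illustrate the ordering requirement, here is a standalone sketch of re-ordering rows to match the requested ids. The `UserRow` shape is hypothetical, and this is not the plugin's actual `sort` implementation, just the idea behind it:

```typescript
interface UserRow {
  id: string;
  username: string;
}

function sortByIds(ids: readonly string[], rows: UserRow[]): (UserRow | Error)[] {
  const byId = new Map(rows.map((row) => [row.id, row]));

  // dataloader expects exactly one entry per requested id, in the same
  // order as the ids; missing records are represented as Error values
  return ids.map((id) => byId.get(id) ?? new Error(`User not found: ${id}`));
}

// A database may return rows in any order, and may omit missing ids entirely
const rows: UserRow[] = [
  { id: '3', username: 'carol' },
  { id: '1', username: 'alice' },
];

const sorted = sortByIds(['1', '2', '3'], rows);
// -> [alice's row, Error for missing id '2', carol's row]
```

Returning an `Error` for a missing id (rather than dropping the entry) keeps the result array aligned with the ids array, which is what the dataloader batch contract requires.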
When defining fields that return `User`s, you will now be able to return either a `string` (based on
the `ids` param of `load`), or a User object (with a type based on the return type of
`loadUsersById`).
```typescript
builder.queryType({
fields: (t) => ({
user: t.field({
type: User,
args: {
id: t.arg.string({ required: true }),
},
// Here we can just return the ID directly rather than loading the user ourselves
resolve: (root, args) => args.id,
}),
currentUser: t.field({
type: User,
// If we already have the user, we use it, and the dataloader will not be called
resolve: (root, args, context) => context.currentUser,
}),
users: t.field({
type: [User],
args: {
ids: t.arg.stringList({ required: true }),
},
// Mixing ids and user objects also works
resolve: (_root, args, context) => [...args.ids, context.currentUser],
}),
}),
});
```
Pothos will detect when a resolver returns `string`, `number`, or `bigint` (typescript will
constrain the allowed types to whatever is expected by the load function). If a resolver returns an
object instead, Pothos knows it can skip the dataloader for that object.
### loadable fields
In some cases you may need more granular dataloaders. To handle these cases there is a new
`t.loadable` method for defining fields with their own dataloaders.
```typescript
// Normal object that the fields below will load
interface PostShape {
id: string;
title: string;
content: string;
}
const Post = builder.objectRef('Post').implement({
fields: (t) => ({
id: t.exposeID('id', {}),
title: t.exposeString('title', {}),
content: t.exposeString('content', {}),
}),
});
// Loading a single Post
builder.objectField(User, 'latestPost', (t) =>
t.loadable({
type: Post,
// will be called with ids of latest posts for all users in query
load: (ids: number[], context) => context.loadPosts(ids),
resolve: (user, args) => user.lastPostID,
}),
);
// Loading multiple Posts
builder.objectField(User, 'posts', (t) =>
t.loadable({
type: [Post],
// will be called with ids of posts loaded for all users in query
load: (ids: number[], context) => context.loadPosts(ids),
resolve: (user, args) => user.postIDs,
}),
);
```
### loadableList fields for one-to-many relations
`loadable` fields can return lists, but do not work for loading a list of records from a single id.
The `loadableList` method can be used to define loadable fields that represent this kind of
relationship.
```typescript
// Loading multiple Posts
builder.objectField(User, 'posts', (t) =>
t.loadableList({
// type is singular, but will create a list field
type: Post,
// will be called with ids of all the users, and should return `Post[][]`
load: (ids: number[], context) => context.postsByUserIds(ids),
resolve: (user, args) => user.id,
}),
);
```
### loadableGroup fields for one-to-many relations
In many cases, it's easier to load a flat list in a dataloader rather than loading a list of lists.
The `loadableGroup` method simplifies this.
```typescript
// Loading multiple Posts
builder.objectField(User, 'posts', (t) =>
t.loadableGroup({
// type is singular, but will create a list field
type: Post,
// will be called with ids of all the users, and should return `Post[]`
load: (ids: number[], context) => db.posts.findMany({ where: { authorId: { in: ids } } }),
// will be called with each post to determine which group it belongs to
group: (post) => post.authorId,
resolve: (user, args) => user.id,
}),
);
```
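Conceptually, the grouping that `loadableGroup` performs can be sketched as a plain function (an illustration of the idea, not the plugin's actual implementation):

```typescript
interface PostRow {
  id: string;
  authorId: number;
}

// Group a flat list of posts into one list per requested author id,
// preserving the order of the ids (as a dataloader requires).
function groupByAuthor(ids: number[], posts: PostRow[]): PostRow[][] {
  const groups = new Map<number, PostRow[]>(ids.map((id): [number, PostRow[]] => [id, []]));
  for (const post of posts) {
    groups.get(post.authorId)?.push(post);
  }
  // One (possibly empty) list per requested id, in the same order
  return ids.map((id) => groups.get(id) ?? []);
}
```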
### Accessing args on loadable fields
By default the `load` method for fields does not have access to the fields arguments. This is
because the dataloader will aggregate the calls across different selections and aliases that may not
have the same arguments. To access the arguments, you can pass `byPath: true` in the fields options.
This will cause the dataloader to only aggregate calls for the same "path" in the query, meaning all
calls share the same arguments. This will allow you to access a 3rd `args` argument on the `load`
method.
```typescript
builder.objectField(User, 'posts', (t) =>
t.loadable({
type: [Post],
byPath: true,
args: {
limit: t.arg.int({ required: true }),
},
load: (ids: number[], context, args) => context.loadPostsByUserIds(ids, args.limit),
resolve: (user, args) => user.id,
}),
);
```
### dataloader options
You can provide additional options for your dataloaders using `loaderOptions`.
```typescript
const User = builder.loadableObject('User', {
loaderOptions: { maxBatchSize: 20 },
load: (ids: string[], context: ContextType) => context.loadUsersById(ids),
fields: (t) => ({ id: t.exposeID('id', {}) }),
});
builder.objectField(User, 'posts', (t) =>
t.loadable({
type: [Post],
loaderOptions: { maxBatchSize: 20 },
load: (ids: number[], context) => context.loadPosts(ids),
resolve: (user, args) => user.postIDs,
}),
);
```
See [dataloader docs](https://github.com/graphql/dataloader#api) for all available options.
### Manually using dataloader
Dataloaders for "loadable" objects can be accessed via their ref by passing in the context object
for the current request. Dataloaders are not shared across requests, so we need the context to get
the correct dataloader for the current request:
```typescript
// create loadable object
const User = builder.loadableObject('User', {
load: (ids: string[], context: ContextType) => context.loadUsersById(ids),
fields: (t) => ({
id: t.exposeID('id', {}),
}),
});
builder.queryField('user', (t) =>
t.field({
type: User,
resolve: (parent, args, context) => {
// get data loader for User type
const loader = User.getDataloader(context);
// manually load a user
return loader.load('123');
},
}),
);
```
### Errors
Calling `dataloader.loadMany` will resolve to a value like `(Type | Error)[]`. Your `load` function
may also return results in that format if your loader can have partial failures. GraphQL does not
have special handling for Error objects. Instead, Pothos will map these results to something like
`(Type | Promise<Type>)[]` where Errors are replaced with promises that will be rejected. This
allows the normal GraphQL resolver flow to correctly handle these errors.
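As a simplified sketch of that mapping (an illustration, not the plugin's actual implementation):

```typescript
// Replace Error entries with rejected promises so that GraphQL's normal
// per-field error handling applies to just the failed entries.
function mapErrorsToRejections<T>(results: readonly (T | Error)[]): (T | Promise<never>)[] {
  return results.map((result) =>
    result instanceof Error ? Promise.reject(result) : result,
  );
}
```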
If you are using the `loadMany` method from a dataloader manually, you can apply the same mapping
using the `rejectErrors` helper:
```typescript
import { rejectErrors } from '@pothos/plugin-dataloader';
builder.queryField('user', (t) =>
t.field({
type: [User],
resolve: (parent, args, context) => {
const loader = User.getDataloader(context);
return rejectErrors(loader.loadMany(['123', '456']));
},
}),
);
```
### (Optional) Adding loaders to context
If you want to make dataloaders accessible via the context object directly, there is some additional
setup required. Below are a few options for different ways you can load data from the context
object. You can determine which of these options works best for you or add your own helpers.
First you'll need to update the types for your context type:
```typescript
import DataLoader from 'dataloader';
import { LoadableRef } from '@pothos/plugin-dataloader';
export interface ContextType {
userLoader: DataLoader<string, { id: number }>; // expose a specific loader
getLoader: <K, V>(ref: LoadableRef<K, V, ContextType>) => DataLoader<K, V>; // helper to get a loader from a ref
load: <K, V>(ref: LoadableRef<K, V, ContextType>, id: K) => Promise<V>; // helper for loading a single resource
loadMany: <K, V>(ref: LoadableRef<K, V, ContextType>, ids: K[]) => Promise<(Error | V)[]>; // helper for loading many
// other context fields
}
```
Next, you'll need to update your context factory function. The exact format of this depends on which
GraphQL server implementation you are using.
```typescript
import { initContextCache } from '@pothos/core';
import { LoadableRef, rejectErrors } from '@pothos/plugin-dataloader';
export const createContext = (req, res): ContextType => ({
// Adding this will prevent any issues if your server implementation
// copies or extends the context object before passing it to your resolvers
...initContextCache(),
// using getters allows us to access the context object using `this`
get userLoader() {
return User.getDataloader(this);
},
get getLoader() {
return <K, V>(ref: LoadableRef<K, V, ContextType>) => ref.getDataloader(this);
},
get load() {
return <K, V>(ref: LoadableRef<K, V, ContextType>, id: K) => ref.getDataloader(this).load(id);
},
get loadMany() {
return <K, V>(ref: LoadableRef<K, V, ContextType>, ids: K[]) =>
rejectErrors(ref.getDataloader(this).loadMany(ids));
},
});
```
Now you can use these helpers from your context object:
```typescript
builder.queryFields((t) => ({
fromContext1: t.field({
type: User,
resolve: (root, args, { userLoader }) => userLoader.load('123'),
}),
fromContext2: t.field({
type: User,
resolve: (root, args, { getLoader }) => getLoader(User).load('456'),
}),
fromContext3: t.field({
type: User,
resolve: (root, args, { load }) => load(User, '789'),
}),
fromContext4: t.field({
type: [User],
resolve: (root, args, { loadMany }) => loadMany(User, ['123', '456']),
}),
}));
```
### Using with the Relay plugin
If you are using the Relay plugin, there is an additional method `loadableNode` that gets added to
the builder. You can use this method to create `node` objects that work like other loadable objects.
```typescript
const UserNode = builder.loadableNode('UserNode', {
id: {
resolve: (user) => user.id,
},
load: (ids: string[], context: ContextType) => context.loadUsersById(ids),
fields: (t) => ({}),
});
```
#### Loadable connections
To data-load a connection, you can use a combination of helpers:
* `builder.connectionObject` To create the connection and edge types
* `builder.loadable` with the `byPath` option to create a loadable field with access to arguments
* `t.arg.connectionArgs` to add the standard connection arguments to the field
```typescript
const UserFriendsConnection = builder.connectionObject({
type: User,
name: 'UserFriendsConnection',
});
builder.objectFields(User, (t) => ({
friends: t.loadable({
type: UserFriendsConnection,
byPath: true,
args: {
...t.arg.connectionArgs(),
},
load: async (ids, context, args) => {
// This implementation assumes you will load all friends for each user, and then filter them with `resolveArrayConnection`.
// This may not be efficient in a large production system
const friendsById = await context.loadFriendsByUserIds(ids);
return ids.map((id) => {
return resolveArrayConnection({ args }, friendsById[id]);
});
},
resolve: (user) => user.id.toString(),
}),
}));
```
### Loadable Refs and Circular references
You may run into type errors if you define 2 loadable objects that circularly reference each other
in their definitions.
There are some general strategies to avoid this outlined in the
[circular-references guide](../guide/circular-references).
This plugin also has methods for creating refs (similar to `builder.objectRef`) that can be used to
split the definition and implementation of your types to avoid any issues with circular references.
```typescript
const User = builder.loadableObjectRef('User', {
load: (ids: string[], context: ContextType) => context.loadUsersById(ids),
});
User.implement({
fields: (t) => ({
id: t.exposeID('id', {}),
}),
});
// Or with relay
const UserNode = builder.loadableNodeRef('UserNode', {
load: (ids: string[], context: ContextType) => context.loadUsersById(ids),
id: {
resolve: (user) => user.id,
},
});
UserNode.implement({
isTypeOf: (obj) => obj instanceof User,
fields: (t) => ({}),
});
```
All the plugin specific options should be passed when defining the ref. This allows the ref to be
used by any method that accepts a ref to implement an object:
```typescript
const User = builder.loadableObjectRef('User', {
load: (ids: string[], context: ContextType) => context.loadUsersById(ids),
});
builder.objectType(User, {
fields: (t) => ({
id: t.exposeID('id', {}),
}),
});
```
The above example is not useful on its own, but this pattern will allow these refs to be used with
other methods that also allow you to define object types with additional behaviors.
### Caching resources loaded manually in a resolver
When manually loading a resource in a resolver it is not automatically added to the dataloader
cache. If you want any resolved value to be stored in the cache in case it is used somewhere else in
the query you can use the `cacheResolved` option.
The `cacheResolved` option takes a function that converts the loaded object into its cache key:
```typescript
const User = builder.loadableObject('User', {
load: (ids: string[], context: ContextType) => context.loadUsersById(ids),
cacheResolved: user => user.id,
fields: (t) => ({
id: t.exposeID('id', {}),
...
}),
});
```
Whenever a resolver returns a User or list of Users, those objects will automatically be added to the
dataloader's cache, so they can be re-used in other parts of the query.
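The caching behavior can be pictured with a small sketch (an illustration of the idea, not the plugin's internals): resolved objects are keyed by the `cacheResolved` function and stored so that a later load for the same id can skip the batch load entirely.

```typescript
// Minimal cache sketch: priming stores a resolved object under its key so a
// later load for the same id does not need to hit the batch load function.
class PrimedCache<T> {
  private cache = new Map<string, T>();
  constructor(private toKey: (value: T) => string) {}
  prime(value: T): void {
    this.cache.set(this.toKey(value), value);
  }
  get(id: string): T | undefined {
    return this.cache.get(id);
  }
}
```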
### Sorting results from your `load` function
As mentioned above, the `load` function must return results in the same order as the provided array
of IDs. Doing this correctly can be a little complicated, so this plugin includes an alternative.
For any type or field that creates a dataloader, you can also provide a `sort` option which will
correctly map your results into the correct order based on their ids. To do this, you will need to
provide a function that accepts a result object, and returns its id.
```typescript
const User = builder.loadableObject('User', {
load: (ids: string[], context: ContextType) => context.loadUsersById(ids),
sort: user => user.id,
fields: (t) => ({
id: t.exposeID('id', {}),
...
}),
});
```
This will also work with loadable nodes, interfaces, unions, or fields.
When sorting, if the list of results contains an Error, the error is thrown because it cannot be
mapped to the correct location. The `sort` option should NOT be used for cases where the result
list is expected to contain errors.
### Shared `toKey` method
Defining multiple functions to extract the key from a loaded object can become redundant. In cases
when you are using both `cacheResolved` and `sort` you can use a `toKey` function instead:
```typescript
const User = builder.loadableObject('User', {
load: (ids: string[], context: ContextType) => context.loadUsersById(ids),
toKey: user => user.id,
cacheResolved: true,
sort: true,
fields: (t) => ({
id: t.exposeID('id', {}),
...
}),
});
```
### Subscriptions
Dataloaders are stored on the context object of the subscription. This means that values are cached across the full lifetime of the subscription.
To reset all data loaders for the current subscription, you can use the `clearAllDataLoaders` helper.
```typescript
import { clearAllDataLoaders } from '@pothos/plugin-dataloader';
clearAllDataLoaders(context);
```
# Directive plugin
URL: /docs/plugins/directives
Directive plugin docs for Pothos
***
title: Directive plugin
description: Directive plugin docs for Pothos
---------------------------------------------
A plugin for using schema directives with schemas generated by Pothos.
Schema Directives are not intended to be used with code first schemas, but there is a large existing
community with several very useful directives built as schema directives.
## Usage
### Install
```package-install
npm install --save @pothos/plugin-directives
```
### Setup
```typescript
import DirectivePlugin from '@pothos/plugin-directives';
import { rateLimitDirective } from 'graphql-rate-limit-directive';
const builder = new SchemaBuilder<{
Directives: {
rateLimit: {
locations: 'OBJECT' | 'FIELD_DEFINITION';
args: { limit: number, duration: number };
};
};
}>({
plugins: [DirectivePlugin],
directives: {
useGraphQLToolsUnorderedDirectives: true,
}
});
builder.queryType({
directives: {
rateLimit: { limit: 5, duration: 60 },
},
fields: (t) => ({
hello: t.string({ resolve: () => 'world' }),
}),
});
const { rateLimitDirectiveTransformer } = rateLimitDirective();
const schema = rateLimitDirectiveTransformer(builder.toSchema());
```
The directives plugin allows you to define types for the directives your schema will use via the
`SchemaTypes` parameter. Each directive can define a set of locations where the directive can appear, and
an object type representing the arguments the directive accepts.
The valid locations for directives are:
* `ARGUMENT_DEFINITION`
* `ENUM_VALUE`
* `ENUM`
* `FIELD_DEFINITION`
* `INPUT_FIELD_DEFINITION`
* `INPUT_OBJECT`
* `INTERFACE`
* `OBJECT`
* `SCALAR`
* `SCHEMA`
* `UNION`
Pothos does not apply the directives itself; this plugin simply adds directive information to the
extensions property of the underlying GraphQL type so that it can be consumed by other tools like
`graphql-tools`.
By default this plugin uses the format that Gatsby uses (described
[here](https://github.com/graphql/graphql-js/issues/1343#issuecomment-479871020)). This format
[was not supported by older versions of `graphql-tools`](https://github.com/ardatan/graphql-tools/issues/2534).
To support older versions of `graphql-tools` or directives that provide a schema visitor based on an
older graphql-tools version like the rate-limit directive from the example above you can set the
`useGraphQLToolsUnorderedDirectives` option. This option does not preserve the order that directives
are defined in. This will be okay for most cases, but may cause issues if your directives need to be
applied in a specific order.
To define directives on your fields or types, you can add a `directives` property in any of the
supported locations using one of the following 2 formats:
```typescript
{
directives: [
{
name: "validation",
args: {
regex: "/abc+/"
}
},
{
name: "required",
args: {},
}
],
// or
directives: {
validation: {
regex: "/abc+/"
},
required: {}
}
}
```
Each of these applies the same 2 directives. The first format is preferred, especially when using
directives that are sensitive to ordering, or can be repeated multiple times for the same location.
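The equivalence of the two formats can be shown with a small conversion sketch (purely illustrative, not part of the plugin):

```typescript
interface DirectiveUse {
  name: string;
  args: Record<string, unknown>;
}

// Convert the object form into the ordered list form. Object key order is
// not guaranteed to be meaningful, which is why the list form is preferred
// for order-sensitive or repeated directives.
function toDirectiveList(directives: Record<string, Record<string, unknown>>): DirectiveUse[] {
  return Object.entries(directives).map(([name, args]) => ({ name, args }));
}
```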
## Applying directives
For most locations (On fields and types) the options object for the field or type will have a
`directives` option which can be used to define directives.
To apply `SCHEMA` directives, you can use the `schemaDirectives` option on the `toSchema` method.
`directives` on `toSchema` is reserved for the Directive implementations.
# Drizzle plugin
URL: /docs/plugins/drizzle
A plugin to support efficient queries through Drizzle's relational query builder API
***
title: Drizzle plugin
description: A plugin to support efficient queries through Drizzle's relational query builder API
------------------------------------------------------------------------------------------------
This package is new, and depends on drizzle [RQBV2 API](https://rqbv2.drizzle-orm-fe.pages.dev/docs/rqb-v2#include-custom-fields). There are still some
missing features, and the API may still change. This package currently requires using the `beta` tag for drizzle.
If you are upgrading from an older version of this plugin, please read the [drizzle migration guide](https://rqbv2.drizzle-orm-fe.pages.dev/docs/relations-v1-v2)
and refer to the changelog for this package for Pothos-specific changes.
## Installing
```package-install
npm install --save @pothos/plugin-drizzle drizzle-orm@beta
```
The drizzle plugin is built on top of Drizzle's relational query builder, and requires that you
define and configure all the relevant relations in your drizzle schema. See
[https://rqbv2.drizzle-orm-fe.pages.dev/docs/relations-v2](https://rqbv2.drizzle-orm-fe.pages.dev/docs/relations-v2) for detailed documentation on the relations API.
Once you have configured your drizzle schema, you can initialize your Pothos
SchemaBuilder with the drizzle plugin:
```ts
import { drizzle } from 'drizzle-orm/...';
// Import the appropriate getTableConfig for your dialect
import { getTableConfig } from 'drizzle-orm/sqlite-core';
import SchemaBuilder from '@pothos/core';
import DrizzlePlugin from '@pothos/plugin-drizzle';
import { relations } from './db/relations';
const db = drizzle(client, { relations });
type DrizzleRelations = typeof relations;
export interface PothosTypes {
DrizzleRelations: DrizzleRelations;
}
const builder = new SchemaBuilder<PothosTypes>({
plugins: [DrizzlePlugin],
drizzle: {
client: db, // or (ctx) => db if you want to create a request specific client
getTableConfig,
relations,
},
});
```
### Integration with other plugins
The drizzle plugin has integrations with several other plugins. While the `with-input` and `relay`
plugins are not required, many examples will assume these plugins have been installed:
```ts
import { drizzle } from 'drizzle-orm/...';
import SchemaBuilder from '@pothos/core';
import DrizzlePlugin from '@pothos/plugin-drizzle';
import RelayPlugin from '@pothos/plugin-relay';
import WithInputPlugin from '@pothos/plugin-with-input';
import { getTableConfig } from 'drizzle-orm/sqlite-core';
import { relations } from './db/relations';
const db = drizzle(client, { relations });
export interface PothosTypes {
DrizzleRelations: typeof relations;
}
const builder = new SchemaBuilder<PothosTypes>({
plugins: [RelayPlugin, WithInputPlugin, DrizzlePlugin],
drizzle: {
client: db,
getTableConfig,
relations,
},
});
```
## Defining Objects
The `builder.drizzleObject` method can be used to define GraphQL Object types based on a drizzle
table:
```ts
const UserRef = builder.drizzleObject('users', {
name: 'User',
fields: (t) => ({
firstName: t.exposeString('firstName'),
lastName: t.exposeString('lastName'),
}),
});
```
You will be able to "expose" any column in the table, and GraphQL fields do not need to match the
names of the columns in your database. The returned `UserRef` can be used like any other `ObjectRef`
in Pothos.
## Custom fields
You will often want to define fields in your API that do not correspond to a specific database
column. To do this, you can define fields with a resolver like any other Pothos object type:
```ts
const UserRef = builder.drizzleObject('users', {
name: 'User',
fields: (t) => ({
fullName: t.string({
resolve: (user, args, ctx, info) => `${user.firstName} ${user.lastName}`,
}),
}),
});
```
## Type selections
In the above example, you can see that by default we have access to all columns of our table. For
tables with many columns, it can be more efficient to only select the needed columns. You can
configure the selected columns and relations by passing a `select` option when defining the type:
```ts
const UserRef = builder.drizzleObject('users', {
name: 'User',
select: {
columns: {
firstName: true,
lastName: true,
},
with: {
profile: true,
},
extras: {
lowercaseName: (users, sql) => sql`lower(${users.firstName})`
},
},
fields: (t) => ({
fullName: t.string({
resolve: (user, args, ctx, info) => `${user.firstName} ${user.lastName}`,
}),
bio: t.string({
resolve: (user) => user.profile.bio,
}),
email: t.string({
resolve: (user) => `${user.lowercaseName}@example.com`,
}),
}),
});
```
Any selections added to the type will be available to consume in all resolvers. Columns that are not
selected can still be exposed as before.
## Field selections
The previous example allows you to control what gets selected by default, but you often want to only
select the columns that are required to fulfill a specific field. You can do this by adding the
appropriate selections on each field:
```ts
const UserRef = builder.drizzleObject('users', {
name: 'User',
select: {},
fields: (t) => ({
fullName: t.string({
select: {
columns: { firstName: true, lastName: true },
},
resolve: (user, args, ctx, info) => `${user.firstName} ${user.lastName}`,
}),
bio: t.string({
select: {
with: { profile: true },
},
resolve: (user) => user.profile.bio,
}),
email: t.string({
select: {
extras: {
lowercaseName: (users, sql) => sql`lower(${users.firstName})`
},
},
resolve: (user) => `${user.lowercaseName}@example.com`,
}),
}),
});
```
## Relations
Drizzle's relational query builder allows you to define the relationships between your tables. The
`t.relation` method makes it easy to add fields to your GraphQL API that implement those
relations:
```ts
builder.drizzleObject('profiles', {
name: 'Profile',
fields: (t) => ({
bio: t.exposeString('bio'),
}),
});
builder.drizzleObject('posts', {
name: 'Post',
fields: (t) => ({
title: t.exposeString('title'),
author: t.relation('author'),
}),
});
builder.drizzleObject('users', {
name: 'User',
fields: (t) => ({
firstName: t.exposeString('firstName'),
profile: t.relation('profile'),
posts: t.relation('posts'),
}),
});
```
The relation will automatically define GraphQL fields of the appropriate type based on the relation
defined in your drizzle schema.
## Relation queries
For some cases, exposing relations as fields without any customization works great, but in some
cases you may want to apply some filtering or ordering to your relations. This can be done by
specifying a `query` option on the relation:
```ts
builder.drizzleObject('users', {
name: 'User',
fields: (t) => ({
firstName: t.exposeString('firstName'),
posts: t.relation('posts', {
args: {
limit: t.arg.int(),
offset: t.arg.int(),
},
query: (args) => ({
limit: args.limit ?? 10,
offset: args.offset ?? 0,
where: {
published: true,
},
orderBy: {
updatedAt: 'desc',
},
}),
}),
drafts: t.relation('posts', {
query: {
where: {
published: false,
},
},
}),
}),
});
```
The query API enables you to define args and convert them into parameters that will be passed into
the relational query builder. You can read more about the relation query builder api
[here](https://orm.drizzle.team/docs/rqb#querying)
## Drizzle Fields
Drizzle objects and relations allow you to define parts of your schema backed by your drizzle
schema, but don't provide a clear entry point into this graph of data. To make your drizzle objects
queryable, we will need to add fields that return our drizzle objects. This can be done using the
`t.drizzleField` method. This can be used to define fields on the root `Query` type, or any other
object type in your schema:
```ts
builder.queryType({
fields: (t) => ({
post: t.drizzleField({
type: 'posts',
args: {
id: t.arg.id({ required: true }),
},
resolve: (query, root, args, ctx) =>
db.query.posts.findFirst(
query({
where: {
id: Number.parseInt(args.id, 10),
},
}),
),
}),
posts: t.drizzleField({
type: ['posts'],
resolve: (query, root, args, ctx) => db.query.posts.findMany(query()),
}),
}),
});
```
The `resolve` function of a `drizzleField` will be passed a `query` function that MUST be called and
passed to a drizzle `findFirst` or `findMany` query. The `query` function optionally accepts any
arguments that are normally passed into the query, and will merge these options with the selection
used to resolve data for the nested GraphQL selections.
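As a rough sketch of the idea (the real `query` helper is considerably more involved), the merge combines the selection Pothos derives from the GraphQL query with any options the resolver passes in:

```typescript
interface QueryOptions {
  columns?: Record<string, boolean>;
  with?: Record<string, unknown>;
  where?: Record<string, unknown>;
}

// Merge resolver-provided options with the derived selection, keeping
// columns from both sides so required fields stay selected.
function mergeQueryOptions(derived: QueryOptions, provided: QueryOptions = {}): QueryOptions {
  return {
    ...derived,
    ...provided,
    columns: { ...derived.columns, ...provided.columns },
  };
}
```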
## Variants
It is often useful to be able to define multiple object types based on the same table. This can be
done using a feature called `variants`. The `variants` API consists of 3 parts:
* A `variant` option that can be passed instead of a name on `drizzleObjects`
* The ability to pass an `ObjectRef` to the `type` option of `t.relation` and other similar fields
* A `t.variant` method that works similar to `t.relation`, but is used to define a GraphQL field that
references a variant of the same record.
```ts
// Viewer type representing the current user
export const Viewer = builder.drizzleObject('users', {
variant: 'Viewer',
select: {},
fields: (t) => ({
id: t.exposeID('id'),
// A reference to the normal user type so normal user fields can be queried
user: t.variant('users'),
// Adding drafts to Viewer allows a user to fetch their own drafts without exposing them to other users in the API
drafts: t.relation('posts', {
query: {
where: {
published: false,
},
orderBy: {
updatedAt: 'desc',
},
},
}),
}),
});
builder.queryType({
fields: (t) => ({
me: t.drizzleField({
// We can use the ref returned by builder.drizzleObject to define our `drizzleField`
type: Viewer,
resolve: (query, root, args, ctx) =>
db.query.users.findFirst(
query({
where: {
id: ctx.user.id,
},
}),
),
}),
}),
});
builder.drizzleNode('users', {
name: 'User',
fields: (t) => ({
firstName: t.exposeString('firstName'),
// This field will resolve to the Viewer type, but be set to null if the user is not the current user
viewer: t.variant(Viewer, {
isNull: (user, args, ctx) => user.id !== ctx.user?.id,
}),
}),
});
```
## Relay integration
Relay provides some very useful best practices that are useful for most GraphQL APIs. To make it
easy to comply with these best practices, the drizzle plugin has built in support for defining relay
`nodes` and `connections`.
## Relay Nodes
Defining relay nodes works just like defining normal `drizzleObject`s, but requires specifying a
column to use as the node's `id` field.
```ts
builder.drizzleNode('users', {
name: 'User',
id: {
column: (user) => user.id,
// other options for the ID field can be passed here
},
fields: (t) => ({
firstName: t.exposeString('firstName'),
lastName: t.exposeString('lastName'),
}),
});
```
The id column can also be set to a list of columns for types with a composite primary key.
## Related connections
To implement a relation as a connection, you can use `t.relatedConnection` instead of `t.relation`:
```ts
builder.drizzleNode('users', {
name: 'User',
fields: (t) => ({
posts: t.relatedConnection('posts'),
}),
});
```
This will automatically define the `Connection` and `Edge` types, and their respective fields. To
customize the Connection and Edge types, options for these types can be passed as additional
arguments to `t.relatedConnection`, just like `t.connection` from the relay plugin. See the
[relay plugin docs](https://pothos-graphql.dev/docs/plugins/relay) for more details.
You can also define a `query` like with `t.relation`. The only difference with `t.relatedConnection`
is that the `orderBy` format is slightly changed.
To comply with the relay spec and efficiently support backwards pagination, some queries need to be
performed in reverse order, which requires inverting the orderBy clause. To do this automatically,
the `t.relatedConnection` method accepts orderBy as an object like `{ asc: column }` or
`{ desc: column }` rather than using the `asc(column)` and `desc(column)` helpers from drizzle.
orderBy can still be returned as either a single column or array when ordering by multiple columns.
Ordering defaults to using the table's `primaryKey`, and the orderBy columns will also be used to
derive the connection's cursor.
```ts
builder.drizzleNode('users', {
name: 'User',
fields: (t) => ({
posts: t.relatedConnection('posts', {
query: () => ({
where: {
published: true,
},
orderBy: {
id: 'desc',
},
}),
}),
}),
});
```
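The reason the object-based `orderBy` format is required can be sketched as a simple inversion helper (illustrative only, not the plugin's actual code):

```typescript
type Direction = 'asc' | 'desc';

// Invert each direction so a backwards-paginated query can run in reverse
// order and the results can be flipped back afterwards.
function invertOrderBy(orderBy: Record<string, Direction>): Record<string, Direction> {
  return Object.fromEntries(
    Object.entries(orderBy).map(([column, dir]): [string, Direction] =>
      [column, dir === 'asc' ? 'desc' : 'asc'],
    ),
  );
}
```

The opaque `asc(column)` / `desc(column)` helpers cannot be inverted this way, which is why the object form is needed here.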
## Drizzle connections
Similar to `t.drizzleField`, `t.drizzleConnection` allows you to define a connection field that acts
as an entry point to your drizzle query. The `orderBy` in `t.drizzleConnection` works the same way
as it does for `t.relatedConnection`.
```ts
builder.queryFields((t) => ({
posts: t.drizzleConnection({
type: 'posts',
resolve: (query, root, args, ctx) =>
db.query.posts.findMany(
query({
where: {
published: true,
},
orderBy: {
id: 'desc',
},
}),
),
}),
}));
```
### Indirect relations as connections
In many cases, you can define many to many connections via drizzle relations, allowing the `relatedConnection` API to work across
more complex relations. In some cases you may want to define a connection for a relation not expressed directly as a relation in
your drizzle schema. For these cases, you can use the `drizzleConnectionHelpers`, which allows you to define a connection with the `t.connection` API.
```typescript
// Create a drizzle object for the node type of your connection
const Role = builder.drizzleObject('roles', {
name: 'Role',
fields: (t) => ({
id: t.exposeID('id'),
name: t.exposeString('name'),
}),
});
// Create connection helpers for the userRoles type. This will allow you
// to use the normal t.connection with a drizzle type
const rolesConnection = drizzleConnectionHelpers(builder, 'userRoles', {
// select the data needed for the nodes
select: (nestedSelection) => ({
with: {
// use nestedSelection to create the correct selection for the node
role: nestedSelection(),
},
}),
// resolve the node from the returned list item
resolveNode: (userRole) => userRole.role,
});
builder.drizzleObjectField('User', 'rolesConnection', (t) =>
t.connection({
// The type for the Node
type: Role,
// since we are not using t.relatedConnection we need to manually
// include the selections for our connection
select: (args, ctx, nestedSelection) => ({
with: {
userRoles: rolesConnection.getQuery(args, ctx, nestedSelection),
},
}),
// This helper takes a list of nodes and formats them for the connection
resolve: (user, args, ctx) => {
return rolesConnection.resolve(user.userRoles, args, ctx, user);
},
}),
);
```
The above example assumes that you are paginating a relation to a join table, where the pagination
args are applied based on the relation to that join table, but the nodes themselves are nested
deeper.
`drizzleConnectionHelpers` can also be used to manually create a connection where the edge and
connections share the same model, and pagination happens directly on a relation to nodes type (even
if that relation is nested).
```ts
const commentConnectionHelpers = drizzleConnectionHelpers(builder, 'Comment');
const SelectPost = builder.drizzleObject('posts', {
fields: (t) => ({
title: t.exposeString('title'),
comments: t.connection({
type: commentConnectionHelpers.ref,
select: (args, ctx, nestedSelection) => ({
with: {
comments: commentConnectionHelpers.getQuery(args, ctx, nestedSelection),
},
}),
resolve: (parent, args, ctx) => commentConnectionHelpers.resolve(parent.comments, args, ctx),
}),
}),
});
```
Arguments, ordering and filtering can also be defined in the helpers:
```ts
const rolesConnection = drizzleConnectionHelpers(builder, 'userRoles', {
// define additional arguments
args: (t) => ({}),
query: (args) => ({
// define an order
orderBy: {
roleId: 'asc',
},
// define a filter
where: {
accepted: true,
}
}),
// select the data needed for the nodes
select: (nestedSelection) => ({
with: {
// use nestedSelection to create the correct selection for the node
role: nestedSelection(),
},
}),
// resolve the node from the returned list item
resolveNode: (userRole) => userRole.role,
});
builder.drizzleObjectField('User', 'rolesConnection', (t) =>
t.connection({
type: Role,
// add the args from the connection helper to the field
args: rolesConnection.getArgs(),
select: (args, ctx, nestedSelection) => ({
with: {
userRoles: rolesConnection.getQuery(args, ctx, nestedSelection),
},
}),
resolve: (user, args, ctx) => rolesConnection.resolve(user.userRoles, args, ctx, user),
}),
);
```
### Extending connection edges
In some cases you may want to expose some data from an indirect connection on the edge object.
```typescript
const rolesConnection = drizzleConnectionHelpers(builder, 'userRoles', {
select: (nestedSelection) => ({
with: {
role: nestedSelection(),
},
}),
resolveNode: (userRole) => userRole.role,
});
builder.drizzleObjectFields('User', (t) => ({
rolesConnection: t.connection(
{
type: Role,
select: (args, ctx, nestedSelection) => ({
with: {
userRoles: rolesConnection.getQuery(args, ctx, nestedSelection),
},
}),
resolve: (user, args, ctx) =>
rolesConnection.resolve(
user.userRoles,
args,
ctx,
user,
),
},
{},
// options for the edge object
{
// define the additional fields on the edge object
fields: (edge) => ({
createdAt: edge.field({
type: 'DateTime',
// the parent shape for edge fields is inferred from the connections resolve function
resolve: (role) => role.createdAt,
}),
}),
},
),
}));
```
### `drizzleConnectionHelpers` for non-relation connections
You can also use `drizzleConnectionHelpers` for non-relation connections where you want a connection where your edges and nodes are not the same type.
Note that when doing this, you need to be careful to properly merge the `where` clause generated by the connection helper with any additional `where` clause you need to apply to your query
```typescript
const rolesConnection = drizzleConnectionHelpers(builder, 'userRoles', {
select: (nestedSelection) => ({
with: {
role: nestedSelection(),
},
}),
resolveNode: (userRole) => userRole.role,
});
builder.queryFields((t) => ({
roles: t.connection({
type: Role,
args: {
userId: t.arg.int({ required: true }),
},
nodeNullable: true,
resolve: async (_, args, ctx, info) => {
const query = rolesConnection.getQuery(args, ctx, info);
const userRoles = await db.query.userRoles.findMany({
...query,
where: {
...query.where,
userId: args.userId,
},
});
return rolesConnection.resolve(userRoles, args, ctx);
},
}),
}));
```
# Errors plugin
URL: /docs/plugins/errors
Errors plugin docs for Pothos
***
title: Errors plugin
description: Errors plugin docs for Pothos
------------------------------------------
A plugin for easily including error types in your GraphQL schema and hooking up error types to
resolvers
## Usage
### Install
```package-install
npm install --save @pothos/plugin-errors
```
### Setup
Ensure that the target in your `tsconfig.json` is set to `es6` or higher (default is `es3`).
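The relevant compiler option looks like this (a minimal sketch; your real `tsconfig.json` will have other settings):

```json
{
  "compilerOptions": {
    "target": "es6"
  }
}
```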
### Example Usage
```typescript
import ErrorsPlugin from '@pothos/plugin-errors';
const builder = new SchemaBuilder({
plugins: [ErrorsPlugin],
errors: {
defaultTypes: [],
},
});
builder.objectType(Error, {
name: 'Error',
fields: (t) => ({
message: t.exposeString('message'),
}),
});
builder.queryType({
fields: (t) => ({
hello: t.string({
errors: {
types: [Error],
},
args: {
name: t.arg.string({ required: true }),
},
resolve: (parent, { name }) => {
if (name.slice(0, 1) !== name.slice(0, 1).toUpperCase()) {
throw new Error('name must be capitalized');
}
return `hello, ${name || 'World'}`;
},
}),
}),
});
```
The above example will produce a GraphQL schema that looks like:
```graphql
type Error {
message: String!
}
type Query {
hello(name: String!): QueryHelloResult
}
union QueryHelloResult = Error | QueryHelloSuccess
type QueryHelloSuccess {
data: String!
}
```
This field can be queried using fragments like:
```graphql
query {
hello(name: "World") {
__typename
... on Error {
message
}
... on QueryHelloSuccess {
data
}
}
}
```
This plugin works by wrapping fields that define error options in a union type. This union consists
of an object type for each error type defined for the field, and a Success object type that wraps
the returned data. If the field's resolver throws an instance of one of the defined errors, the
errors plugin will automatically resolve to the corresponding error object type.
### Builder options
* `defaultTypes`: An array of Error classes to include in every field with error handling.
* `directResult`: Sets the default for `directResult` option on fields (only affects non-list
fields)
* `defaultResultOptions`: Sets the defaults for `result` option on fields.
* `name`: Function to generate a custom name on the generated result types.
```ts
export const builderWithCustomErrorTypeNames = new SchemaBuilder<{}>({
plugins: [ErrorsPlugin, ValidationPlugin],
errors: {
defaultTypes: [Error],
defaultResultOptions: {
name: ({ parentTypeName, fieldName }) => `${fieldName}_Custom`,
},
defaultUnionOptions: {
name: ({ parentTypeName, fieldName }) => `${fieldName}_Custom`,
},
},
});
```
* `defaultUnionOptions`: Sets the defaults for `union` option on fields.
* `name`: Function to generate a custom name on the generated union types.
```ts
export const builderWithCustomErrorTypeNames = new SchemaBuilder<{}>({
plugins: [ErrorsPlugin, ValidationPlugin],
errors: {
defaultTypes: [Error],
defaultResultOptions: {
name: ({ parentTypeName, fieldName }) => `${fieldName}_Custom`,
},
defaultUnionOptions: {
name: ({ parentTypeName, fieldName }) => `${fieldName}_Custom`,
},
},
});
```
### Options on Fields
* `types`: An array of Error classes to catch and handle as error objects in the schema. Will be
merged with `defaultTypes` from builder.
* `union`: An options object for the union type. Can include any normal union type options, and
`name` option for setting a custom name for the union type.
* `result`: An options object for result object type. Can include any normal object type options,
and `name` option for setting a custom name for the result type.
* `dataField`: An options object for the data field on the result object. This field will be named
`data` by default, but can be renamed by passing a custom `name` option.
* `directResult`: Boolean, can only be set to true for non-list fields. This will directly include
the fields type in the union rather than creating an intermediate Result object type. This will
throw at build time if the type is not an object type.
### Recommended Usage
1. Set up an Error interface
2. Create a BaseError object type
3. Include the Error interface in any custom Error types you define
4. Include the BaseError type in the `defaultTypes` in the builder config
This pattern will allow you to consistently query your schema using a `... on Error { message }`
fragment, since all Error classes extend that interface. If your clients want to query details of
more specialized error types, they can simply add a fragment for the errors they care about. This
pattern should also make it easier to make future changes without unexpected breaking changes for
your clients.
The following is a small example of this pattern:
```typescript
import ErrorsPlugin from '@pothos/plugin-errors';
const builder = new SchemaBuilder({
plugins: [ErrorsPlugin],
errors: {
defaultTypes: [Error],
},
});
const ErrorInterface = builder.interfaceRef('Error').implement({
fields: (t) => ({
message: t.exposeString('message'),
}),
});
builder.objectType(Error, {
name: 'BaseError',
interfaces: [ErrorInterface],
});
class LengthError extends Error {
minLength: number;
constructor(minLength: number) {
super(`string length should be at least ${minLength}`);
this.minLength = minLength;
this.name = 'LengthError';
}
}
builder.objectType(LengthError, {
name: 'LengthError',
interfaces: [ErrorInterface],
fields: (t) => ({
minLength: t.exposeInt('minLength'),
}),
});
builder.queryType({
fields: (t) => ({
// Simple error handling just using base error class
hello: t.string({
errors: {},
args: {
name: t.arg.string({ required: true }),
},
resolve: (parent, { name }) => {
if (!name.startsWith(name.slice(0, 1).toUpperCase())) {
throw new Error('name must be capitalized');
}
return `hello, ${name || 'World'}`;
},
}),
// Handling custom errors
helloWithMinLength: t.string({
errors: {
types: [LengthError],
},
args: {
name: t.arg.string({ required: true }),
},
resolve: (parent, { name }) => {
if (name.length < 5) {
throw new LengthError(5);
}
return `hello, ${name || 'World'}`;
},
}),
}),
});
```
### With zod plugin
To use this in combination with the zod plugin, ensure that the errors plugin is listed
BEFORE the zod plugin in your plugin list.
Once your plugins are set up, you can define types for a ZodError, the same way you would for any
other error type. Below is a simple example of how this can be done, but the specifics of how you
structure your error types are left up to you.
```typescript
// Util for flattening zod errors into something easier to represent in your Schema.
function flattenErrors(
error: ZodFormattedError<unknown>,
path: string[],
): { path: string[]; message: string }[] {
const errors = error._errors.map((message) => ({
path,
message,
}));
Object.keys(error).forEach((key) => {
if (key !== '_errors') {
errors.push(
...flattenErrors((error as Record<string, unknown>)[key] as ZodFormattedError<unknown>, [
...path,
key,
]),
);
}
});
return errors;
}
// A type for the individual validation issues
const ZodFieldError = builder
.objectRef<{
message: string;
path: string[];
}>('ZodFieldError')
.implement({
fields: (t) => ({
message: t.exposeString('message'),
path: t.exposeStringList('path'),
}),
});
// The actual error type
builder.objectType(ZodError, {
name: 'ZodError',
interfaces: [ErrorInterface],
fields: (t) => ({
fieldErrors: t.field({
type: [ZodFieldError],
resolve: (err) => flattenErrors(err.format(), []),
}),
}),
});
builder.queryField('fieldWithValidation', (t) =>
t.boolean({
errors: {
types: [ZodError],
},
args: {
string: t.arg.string({
validate: {
type: 'string',
minLength: 3,
},
}),
},
resolve: () => true,
}),
);
```
Example query:
```graphql
query {
fieldWithValidation(string: "a") {
__typename
... on QueryFieldWithValidationSuccess {
data
}
... on ZodError {
fieldErrors {
message
path
}
}
}
}
```
### With the dataloader plugin
To use this in combination with the dataloader plugin, ensure that the errors plugin is listed
BEFORE the dataloader plugin in your plugin list.
If a field with `errors` returns a `loadableObject`, or `loadableNode` the errors plugin will now
catch errors thrown when loading ids returned by the `resolve` function.
If the field is a `List` field, errors that occur when resolving objects from `ids` will not be
handled by the errors plugin. This is because those errors are associated with each item in the list
rather than the list field itself. In the future, the dataloader plugin may have an option to throw
an error at the field level if any items can not be loaded, which would allow the error plugin to
handle these types of errors.
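As a sketch of how this combination looks (the `UserNode` name, `loadUsers` batch function, and user shape are hypothetical, not from the docs above):

```typescript
// Hypothetical: a loadable object whose load function may throw for some ids
const UserNode = builder.loadableObject('UserNode', {
  load: (ids: string[], context) => loadUsers(ids),
  fields: (t) => ({
    id: t.exposeID('id'),
  }),
});

builder.queryField('user', (t) =>
  t.field({
    type: UserNode,
    // errors thrown while loading the returned id are caught by the errors plugin
    errors: {},
    args: { id: t.arg.id({ required: true }) },
    // returning an id defers loading to the dataloader; if loadUsers throws
    // for this id, the field resolves to the corresponding error object type
    resolve: (root, args) => args.id.toString(),
  }),
);
```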
### With the prisma plugin
To use this in combination with the prisma plugin, ensure that that errors plugin is listed BEFORE
the prisma plugin in your plugin list. This will enable `errors` option to work correctly with any
field builder method from the prisma plugin.
`errors` can be configured for any field, but if there is an error pre-loading a relation, the error
will always be surfaced at the field that executed the query. Because there are cases that fall back to
executing queries for relation fields, these fields may still have errors if the relation was not
pre-loaded. Detection of nested relations will continue to work if those relations use the `errors`
plugin.
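For example, using the prisma plugin's `t.prismaField` (a sketch; assumes a Prisma `User` model and a `prisma` client instance):

```typescript
builder.queryField('userByEmail', (t) =>
  t.prismaField({
    type: 'User',
    errors: { types: [Error] },
    args: { email: t.arg.string({ required: true }) },
    resolve: (query, root, args) =>
      // findUniqueOrThrow throws when no user matches, which the errors
      // plugin turns into the Error member of the result union
      prisma.user.findUniqueOrThrow({
        ...query,
        where: { email: args.email },
      }),
  }),
);
```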
### List item errors
For fields that return lists, you can specify `itemErrors` to wrap the list items in a union type so
that errors can be handled per-item rather than replacing the whole list with an error.
The `itemErrors` options are exactly the same as the `errors` options, but they are applied to each
item in the list rather than the whole list.
```typescript
builder.queryType({
fields: (t) => ({
listWithErrors: t.intList({
itemErrors: {},
resolve: () => {
return [
1,
2,
new Error('Boom'),
3,
];
},
}),
}),
});
```
This will produce a GraphQL schema that looks like:
```graphql
type Query {
listWithErrors: [QueryListWithErrorsItemResult!]!
}
union QueryListWithErrorsItemResult = Error | QueryListWithErrorsItemSuccess
type QueryListWithErrorsItemSuccess {
data: Int!
}
```
Item errors also work with both sync and async iterators (in graphql@>=17, or other executors that support the `@stream` directive):
```typescript
builder.queryType({
fields: (t) => ({
asyncListWithErrors: t.intList({
itemErrors: {},
resolve: async function* () {
yield 1;
yield 2;
yield new Error('Boom');
yield 4;
throw new Error('Boom');
},
}),
}),
});
```
When an error is yielded, an error result will be added to the list. If the generator throws an error,
the error will be added to the list, and no more results will be returned for that field.
You can also use the `errors` and `itemErrors` options together:
```typescript
builder.queryType({
fields: (t) => ({
listWithErrors: t.intList({
itemErrors: {},
errors: {},
resolve: () => {
return [
1,
new Error('Boom'),
3,
];
},
}),
}),
});
```
This will produce a GraphQL schema that looks like:
```graphql
type Query {
listWithErrors: [QueryListWithErrorsResult!]!
}
union QueryListWithErrorsResult = Error | QueryListWithErrorsSuccess
type QueryListWithErrorsSuccess {
data: [QueryListWithErrorsItemResult!]!
}
union QueryListWithErrorsItemResult = Error | QueryListWithErrorsItemSuccess
type QueryListWithErrorsItemSuccess {
data: Int!
}
```
# Federation plugin
URL: /docs/plugins/federation
Federation plugin docs for Pothos
***
title: Federation plugin
description: Federation plugin docs for Pothos
----------------------------------------------
A plugin for building subGraphs that are compatible with
[Apollo Federation 2](https://www.apollographql.com/docs/federation/).
## Usage
This page will describe the basics of the Pothos API for federation, but will not cover detailed
information on how federation works, or what all the terms on this page mean. For more general
information on federation, see the
[official docs](https://www.apollographql.com/docs/federation/v2/).
### Install
You will need to install the plugin, as well as the directives plugin (`@pothos/plugin-directives`)
and `@apollo/subgraph`
```package-install
npm install --save @pothos/plugin-federation @pothos/plugin-directives @apollo/subgraph
```
You will likely want to install `@apollo/server` as well, but it is not required if you want to use a
different server.
```package-install
npm install --save @apollo/server
```
### Setup
```typescript
import DirectivePlugin from '@pothos/plugin-directives';
import FederationPlugin from '@pothos/plugin-federation';
const builder = new SchemaBuilder({
// If you are using other plugins, the federation plugin should be listed after plugins like auth that wrap resolvers
plugins: [DirectivePlugin, FederationPlugin],
});
```
### Defining entities
Defining entities for your schema is a 2 step process. First you will need to define an object type
as you would normally, then you can convert that object type to an entity by providing a `key` (or
`keys`), and a method to load that entity.
```typescript
const UserType = builder.objectRef('User').implement({
fields: (t) => ({
id: t.exposeID('id'),
name: t.exposeString('name'),
username: t.exposeString('username'),
}),
});
builder.asEntity(UserType, {
key: builder.selection<{ id: string }>('id'),
resolveReference: (user, users) => users.find(({ id }) => user.id === id),
});
```
`keys` are defined using `builder.selection`. This method *MUST* be called with a generic argument
that defines the types for any fields that are part of the key. `key` may also be an array.
`resolveReference` will be called with the type used by the `key` selection.
Entities are Object types that may be extended with or returned by fields in other services.
`builder.asEntity` describes how the Entity will be loaded when used by another service. The `key`
selection (or selections) should use the types of scalars your server will produce for inputs. For
example, Apollo server will convert all ID fields to `string`s, even if resolvers in other services
return IDs as numbers.
### Extending external entities
External entities can be extended by calling `builder.externalRef`, and then calling implement on
the returned ref.
`builder.externalRef` takes the name of the entity, a selection (using `builder.selection`, just
like a `key` on an entity object), and a resolve method that loads an object given a `key`. The
return type of the resolver is used as the backing type for the ref, and will be the type of the
`parent` arg when defining fields for this type. The `key` also describes what fields will be
selected from another service to use as the `parent` object in resolvers for fields added when
implementing the `externalRef`.
```typescript
const ProductRef = builder.externalRef(
'Product',
builder.selection<{ upc: string }>('upc'),
(entity) => {
const product = inventory.find(({ upc }) => upc === entity.upc);
// extends the entity ({upc: string}) with other product details available in this service
return product && { ...entity, ...product };
},
);
ProductRef.implement({
// Additional external fields can be defined here which can be used by `requires` or `provides` directives
externalFields: (t) => ({
price: t.float(),
weight: t.float(),
}),
fields: (t) => ({
// exposes properties added during loading of the entity above
upc: t.exposeString('upc'),
inStock: t.exposeBoolean('inStock'),
shippingEstimate: t.float({
// fields can add a `requires` directive for any of the externalFields defined above
// which will be made available as part of the first arg in the resolver.
requires: builder.selection<{ weight?: number; price?: number }>('price weight'),
resolve: (data) => {
// free for expensive items
if ((data.price ?? 0) > 1000) {
return 0;
}
// estimate is based on weight
return (data.weight ?? 0) * 0.5;
},
}),
}),
});
```
To set the `resolvable` property of an entity's key to `false`, you can use `builder.keyDirective`:
```ts
const ProductRef = builder.externalRef(
'Product',
builder.keyDirective(builder.selection<{ upc: string }>('upc'), false),
);
```
### Adding a provides directive
To add a `@provides` directive, you will need to implement the Parent type of the field being
provided as an external ref, and then use the `.provides` method of the returned ref when defining
the field that will have the `@provides` directive. The provided field must be listed as an
`externalField` in the external type.
```typescript
const UserType = builder.externalRef('User', builder.selection<{ id: string }>('id')).implement({
externalFields: (t) => ({
// The field that will be provided
username: t.string(),
}),
fields: (t) => ({
id: t.exposeID('id'),
}),
});
const ReviewType = builder.objectRef('Review');
ReviewType.implement({
fields: (t) => ({
id: t.exposeID('id'),
body: t.exposeString('body'),
author: t.field({
// using UserType.provides<...>(...) instead of just UserType adds the provide annotations
// and ensures the resolved value includes data for the provided field
// The generic in Type.provides works the same as the `builder.selection` method.
type: UserType.provides<{ username: string }>('username'),
resolve: (review) => ({
id: review.authorID,
username: usernames.find((username) => username.id === review.authorID)!.username,
}),
}),
product: t.field({
type: Product,
resolve: (review) => ({ upc: review.product.upc }),
}),
}),
});
```
### Building your schema and starting a server
```typescript
// Use new `toSubGraphSchema` method to add subGraph specific types and queries to the schema
const schema = builder.toSubGraphSchema({
// defaults to v2.6
linkUrl: 'https://specs.apollo.dev/federation/v2.3',
// defaults to the list of directives used in your schema
federationDirectives: ['@key', '@external', '@requires', '@provides'],
});
const server = new ApolloServer({
schema,
});
startStandaloneServer(server, { listen: { port: 4000 } })
.then(({ url }) => {
console.log(`🚀 Server ready at ${url}`);
})
.catch((error) => {
throw error;
});
```
For a functional example that combines multiple graphs built with Pothos into a single schema see
[https://github.com/hayes/pothos/tree/main/packages/plugin-federation/tests/example](https://github.com/hayes/pothos/tree/main/packages/plugin-federation/tests/example)
### Printing the schema
If you are printing the schema as a string for any reason, and then using the printed schema for
Apollo Federation (submitting it if using Managed Federation, or composing manually with `rover`), you
must use `printSubgraphSchema` (from `@apollo/subgraph`) or another compatible way of printing the
schema (one that includes directives) in order for it to work.
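For example (a sketch using Node's `fs`, assuming `schema` is the result of `builder.toSubGraphSchema` from the previous section):

```typescript
import { writeFileSync } from 'node:fs';
import { printSubgraphSchema } from '@apollo/subgraph';

// printSubgraphSchema keeps federation directives like @key in the printed SDL,
// which the plain graphql printSchema would drop
const sdl = printSubgraphSchema(schema);
writeFileSync('schema.graphql', sdl);
```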
### Field directives
Several federation directives can be configured directly when defining a field, including
`@shareable`, `@tag`, `@inaccessible`, and `@override`.
```ts
t.field({
type: 'String',
shareable: true,
tag: ['someTag'],
inaccessible: true,
override: { from: 'users' },
});
```
For more details on these directives, see the official Federation documentation.
### interface entities and @interfaceObject
Federation 2.3 introduces new features for federating interface definitions.
You can now pass interfaces to `asEntity` to define keys for an interface:
```ts
const Media = builder.interfaceRef<{ id: string }>('Media').implement({
fields: (t) => ({
id: t.exposeID('id'),
...
}),
});
builder.asEntity(Media, {
key: builder.selection<{ id: string }>('id'),
resolveReference: ({ id }) => loadMediaById(id),
});
```
You can also extend interfaces from another subGraph by creating an `interfaceObject`:
```ts
const Media = builder.objectRef<{ id: string }>('Media').implement({
fields: (t) => ({
id: t.exposeID('id'),
// add new MediaFields here that are available on all implementors of the `Media` type
}),
});
builder.asEntity(Media, {
interfaceObject: true,
key: builder.selection<{ id: string }>('id'),
resolveReference: (ref) => ref,
});
```
See federation documentation for more details on `interfaceObject`s
### composeDirective
You can apply the `composeDirective` directive when building the subgraph schema:
```ts
export const schema = builder.toSubGraphSchema({
// This adds the @composeDirective directive
composeDirectives: ['@custom'],
// composeDirective requires an @link directive on the schema pointing to the URL for your directive
schemaDirectives: {
link: { url: 'https://myspecs.dev/myCustomDirective/v1.0', import: ['@custom'] },
},
// You currently also need to provide an actual implementation for your Directive
directives: [
new GraphQLDirective({
locations: [DirectiveLocation.OBJECT, DirectiveLocation.INTERFACE],
name: 'custom',
}),
],
});
```
# Grafast plugin
URL: /docs/plugins/grafast
A plugin for building schemas with Grafast plans instead of resolvers
***
title: Grafast plugin
description: A plugin for building schemas with Grafast plans instead of resolvers
----------------------------------------------------------------------------------
This package is currently experimental and will have breaking changes in the near future.
This plugin currently does not work with MOST other Pothos plugins.
Many plugins depend on wrapping resolvers to add runtime functionality to your schema, which will not work
with grafast.
## Install
```package-install
npm install --save @pothos/plugin-grafast grafast@>=0.1.1-beta.24
```
## Setup
```typescript
import GrafastPlugin from '@pothos/plugin-grafast';
declare global {
namespace Grafast {
// Define the Context type used by grafast
interface Context extends YourContextType {}
}
}
type BuilderTypes = {
// This tells the builder to expect plans instead of resolvers
InferredFieldOptionsKind: 'Grafast';
Context: YourContextType;
};
const builder = new SchemaBuilder({
plugins: [GrafastPlugin],
});
```
## Usage
For documentation on how to write plans, see the [Grafast documentation](https://grafast.org/grafast/).
### Adding plans to fields
```typescript
builder.queryType({
fields: (t) => ({
addTwoNumbers: t.int({
args: {
a: t.arg.int({ required: true }),
b: t.arg.int({ required: true }),
},
plan: (_, { $a, $b }) => {
return lambda([$a, $b], ([a, b]) => a + b);
},
}),
}),
});
```
### Using resolvers
Pothos and Grafast will still allow you to write resolvers when using grafast,
but you will not have access to the 4th `GraphQLResolveInfo` argument:
```typescript
builder.queryType({
fields: (t) => ({
addTwoNumbers: t.int({
args: {
a: t.arg.int({ required: true }),
b: t.arg.int({ required: true }),
},
resolve: (_, { a, b }) => {
return a + b;
},
}),
}),
});
```
Resolvers should not be used to load data, but can make it easier to define a field
that would otherwise use a simple `lambda` plan.
### Abstract types
Abstract types (Unions and Interfaces) may require defining a plan to resolve to the correct type.
For more details on how polymorphic types work in Grafast, see the [Grafast documentation](https://grafast.org/grafast/polymorphism).
#### Interfaces
To implement an interface, you can implement it as you normally would in Pothos, and then call the
`.withPlan` method on the interface ref to provide a plan for resolving the correct type.
```typescript
interface AnimalData {
id: string;
kind: 'Dog' | 'Cat';
}
export const Animal = builder
.interfaceRef('Animal')
.withPlan({
planType: ($record) => ({
$__typename: get($record, 'kind'),
}),
});
Animal.implement({
fields: (t) => ({
id: t.exposeID('id'),
}),
});
export const Dog = builder.objectRef('Dog').implement({
interfaces: [Animal],
});
export const Cat = builder.objectRef('Cat').implement({
interfaces: [Animal],
});
```
You can now define a query to resolve this interface:
```typescript
export const Animals = [
{
id: '1',
kind: 'Dog',
},
{
id: '2',
kind: 'Cat',
},
] satisfies AnimalData[];
function getAnimalsById(ids: readonly string[]): (AnimalData | null)[] {
return ids.map((id) => Animals.find((entity) => entity.id === id) ?? null);
}
builder.queryFields((t) => ({
animal: t.field({
type: Animal,
args: {
id: t.arg.string({ required: true }),
},
plan: (_, $args) => loadOne($args.$id, getAnimalsById),
}),
}));
```
### Unions
Unions can be implemented just like interfaces:
```typescript
interface AlienData {
id: string;
kind: 'Alien';
}
export const Alien = builder.objectRef('Alien').implement({
fields: (t) => ({
id: t.exposeID('id'),
}),
});
export const Entity = builder
.unionType('Entity', {
types: [Cat, Dog, Alien],
})
.withPlan({
planType: ($record) => ({
$__typename: get($record, 'kind'),
}),
});
```
### `planForType`
When planning polymorphic types, Grafast allows you to provide a `planForType` function that loads
the correct data for the current type.
This also enables changing the type of plan required for fields that return the abstract type:
`planForType` is not entirely type-safe, and will allow plans that resolve to data for the wrong type.
This API is likely to change in the future.
```typescript
export const Entity = builder
.unionType('Entity', {
types: [Cat, Dog, Alien],
})
.withPlan({
planType: (
// Provide an explicit type so that the query field only needs to return the ID
$specifier: Step,
) => {
const $record = inhibitOnNull(loadOne($specifier, getEntitiesById));
return {
$__typename: get($record, 'kind'),
planForType: () => $record,
};
},
});
builder.queryFields((t) => ({
entity: t.field({
type: Entity,
args: {
id: t.arg.string({ required: true }),
},
// Because our Entity plan loads the record, we can just return the ID here
plan: (_, $args) => $args.$id,
}),
}));
```
# Plugins
URL: /docs/plugins
List of plugins for Pothos
***
title: Plugins
description: List of plugins for Pothos
---------------------------------------
# Mocks plugin
URL: /docs/plugins/mocks
Mocks plugin docs for Pothos
***
title: Mocks plugin
description: Mocks plugin docs for Pothos
-----------------------------------------
A simple plugin for adding resolver mocks to a GraphQL schema.
## Usage
### Install
```package-install
npm install --save @pothos/plugin-mocks
```
### Setup
```typescript
import MocksPlugin from '@pothos/plugin-mocks';
const builder = new SchemaBuilder({
plugins: [MocksPlugin],
});
```
### Adding mocks
You can mock any field by adding a mock in the options passed to `builder.toSchema` under
`mocks.{typeName}.{fieldName}`.
```typescript
builder.queryType({
fields: (t) => ({
someField: t.string({
resolve: () => {
throw new Error('Not implemented');
},
}),
}),
});
builder.toSchema({
mocks: {
Query: {
someField: (parent, args, context, info) => 'Mock result!',
},
},
});
```
Mocks will replace the resolve functions any time a mocked field is executed. A schema can be built
multiple times with different mocks.
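For instance, you might build one mocked schema for tests and an unmocked one for production (the variable names here are illustrative):

```typescript
// Build once with mocks for testing...
const testSchema = builder.toSchema({
  mocks: {
    Query: {
      someField: () => 'Mock result!',
    },
  },
});

// ...and again without mocks, using the real resolvers.
const schema = builder.toSchema();
```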
### Adding mocks for subscribe functions
To add a mock for a subscriber you can nest the mocks for subscribe and resolve in an object:
```typescript
builder.subscriptionType({
fields: (t) => ({
someField: t.string({
resolve: () => {
throw new Error('Not implemented');
},
subscribe: () => {
throw new Error('Not implemented');
},
}),
}),
});
builder.toSchema({
mocks: {
Subscription: {
someField: {
resolve: (parent, args, context, info) => 'Mock result!',
subscribe: (parent, args, context, info) => {
/* return a mock async iterator */
},
},
},
},
});
```
# Relay plugin
URL: /docs/plugins/relay
Relay plugin docs for Pothos
***
title: Relay plugin
description: Relay plugin docs for Pothos
-----------------------------------------
The Relay plugin adds a number of builder methods and helper functions to simplify building a relay
compatible schema.
## Usage
### Install
```package-install
npm install --save @pothos/plugin-relay
```
### Setup
```typescript
import RelayPlugin from '@pothos/plugin-relay';
const builder = new SchemaBuilder({
plugins: [RelayPlugin],
relay: {},
});
```
### Options
The `relay` options object passed to builder can contain the following properties:
* `idFieldName`: The name of the field that contains the global id for the node. Defaults to `id`.
* `idFieldOptions`: Options to pass to the id field.
* `clientMutationId`: `omit` (default) | `required` | `optional`. Determines if clientMutationId
fields are created on `relayMutationFields`, and if they are required.
* `relayMutationFieldOptions`: Default options for the `relayMutationField` method.
* `cursorType`: `String` | `ID`. Determines type used for cursor fields. Defaults to `String`
* `nodeQueryOptions`: Options for the `node` field on the query object, set to false to omit the
field
* `nodesQueryOptions`: Options for the `nodes` field on the query object, set to false to omit the
field
* `nodeTypeOptions`: Options for the `Node` interface type
* `pageInfoTypeOptions`: Options for the `PageInfo` object type
* `clientMutationIdFieldOptions`: Options for the `clientMutationId` field on connection objects
* `clientMutationIdInputOptions`: Options for the `clientMutationId` input field on connections
fields
* `mutationInputArgOptions`: Options for the Input object created for each connection field
* `cursorFieldOptions`: Options for the `cursor` field on an edge object.
* `nodeFieldOptions`: Options for the `node` field on an edge object.
* `edgesFieldOptions`: Options for the `edges` field on a connection object.
* `pageInfoFieldOptions`: Options for the `pageInfo` field on a connection object.
* `hasNextPageFieldOptions`: Options for the `hasNextPage` field on the `PageInfo` object.
* `hasPreviousPageFieldOptions`: Options for the `hasPreviousPage` field on the `PageInfo` object.
* `startCursorFieldOptions`: Options for the `startCursor` field on the `PageInfo` object.
* `endCursorFieldOptions`: Options for the `endCursor` field on the `PageInfo` object.
* `beforeArgOptions`: Options for the `before` arg on a connection field.
* `afterArgOptions`: Options for the `after` arg on a connection field.
* `firstArgOptions`: Options for the `first` arg on a connection field.
* `lastArgOptions`: Options for the `last` arg on a connection field.
* `defaultConnectionTypeOptions`: Default options for the `Connection` Object types.
* `defaultEdgeTypeOptions`: Default options for the `Edge` Object types.
* `defaultPayloadTypeOptions`: Default options for the `Payload` Object types.
* `defaultMutationInputTypeOptions`: default options for the mutation `Input` types.
* `nodesOnConnection`: If true, the `nodes` field will be added to the `Connection` object types.
* `defaultConnectionFieldOptions`: Default options for connection fields defined with t.connection
* `brandLoadedObjects`: Defaults to `true`. This will add a hidden symbol to objects returned from
the `load` methods of Nodes that allows the default `resolveType` implementation to identify the
type of the node. When this is enabled, you will not need to implement an `isTypeOf` check for
most common patterns.
### Creating Nodes
To create objects that extend the `Node` interface, you can use the new `builder.node` method.
```typescript
// Using object refs
const User = builder.objectRef('User');
// Or using a class
class User {
  id: string;
  name: string;
}

builder.node(User, {
  // define an id field
  id: {
    resolve: (user) => user.id,
    // other options for id field can be added here
  },
  // Define only one of the following methods for loading nodes by id
  loadOne: (id) => loadUserByID(id),
  loadMany: (ids) => loadUsers(ids),
  loadWithoutCache: (id) => loadUserByID(id),
  loadManyWithoutCache: (ids) => loadUsers(ids),
  // if using a class instead of a ref, you will need to provide a name
  name: 'User',
  fields: (t) => ({
    name: t.exposeString('name'),
  }),
});
```
`builder.node` will create an object type that implements the `Node` interface. It will also create
the `Node` interface the first time it is used. The `resolve` function for `id` should return a
number or string, which will be converted to a globalID. The relay plugin adds two new query fields,
`node` and `nodes`, which can be used to directly fetch nodes using global IDs by calling the
provided `loadOne` or `loadMany` method. Each node will only be loaded once by id, and cached if the
same node is loaded multiple times in the same request. You can provide `loadWithoutCache` or
`loadManyWithoutCache` instead if caching is not desired, or if you are already using a caching
datasource like a dataloader.
Nodes may also implement an `isTypeOf` method which can be used to resolve the correct type for
lists of generic nodes. When using a class as the type parameter, the `isTypeOf` method defaults to
using an `instanceof` check, and falls back to checking the constructor property on the prototype.
This means that in many cases, if you are using classes in your type parameters and all your values
are instances of those classes, you won't need to implement an `isTypeOf` method, but it is usually
better to explicitly define that behavior.
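The default class-based check can be sketched roughly as follows. This is an assumption about the internals, shown only for illustration: an `instanceof` check first, with the constructor fallback illustrated here as a constructor-name comparison:

```typescript
// Sketch of a default isTypeOf for class-based nodes: instanceof first,
// then a constructor fallback (illustrated via the constructor's name).
function defaultIsTypeOfSketch(
  value: unknown,
  cls: abstract new (...args: never[]) => unknown,
): boolean {
  if (value instanceof cls) return true;
  return (
    typeof value === 'object' &&
    value !== null &&
    Object.getPrototypeOf(value)?.constructor?.name === cls.name
  );
}
```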
By default (unless `brandLoadedObjects` is set to `false`) any nodes loaded through one of the
`load*` methods will be branded so that the default `resolveType` method can identify the GraphQL
type for the loaded object. This means `isTypeOf` is only required for `union` and `interface`
fields that return node objects that are manually loaded, where the union or interface does not have
a custom `resolveType` method that knows how to resolve the node type.
#### Parsing node IDs
By default all node ids are parsed as strings. This behavior can be customized by providing a custom
parse function for your node's ID field:
```ts
const User = builder.objectRef('User');

builder.node(User, {
  // define an id field
  id: {
    resolve: (user) => user.id,
    parse: (id) => Number.parseInt(id, 10),
  },
  // the ID is now a number
  loadOne: (id) => loadUserByID(id),
  ...
});
```
### Global IDs
To make it easier to create globally unique ids the relay plugin adds new methods for creating
globalID fields.
```typescript
import { encodeGlobalID } from '@pothos/plugin-relay';

builder.queryFields((t) => ({
  singleID: t.globalID({
    resolve: (parent, args, context) => {
      return { id: 123, type: 'SomeType' };
    },
  }),
  listOfIDs: t.globalIDList({
    resolve: (parent, args, context) => {
      return [{ id: 123, type: 'SomeType' }];
    },
  }),
}));
```
The returned IDs can either be a string (which is expected to already be a globalID), or an object
with an `id` and a `type`. The type can be either the name of a type as a string, or any object that
can be used in a type parameter.
There are also new methods for adding globalIDs in arguments or fields of input types:
```typescript
builder.queryType({
  fields: (t) => ({
    fieldThatAcceptsGlobalID: t.boolean({
      args: {
        id: t.arg.globalID({
          required: true,
        }),
        idList: t.arg.globalIDList(),
      },
      resolve(parent, args) {
        console.log(`Get request for type ${args.id.typename} with id ${args.id.id}`);
        return true;
      },
    }),
  }),
});
```
globalIDs used in arguments expect the client to send a globalID string, but will automatically be
converted to an object with 2 properties (`id` and `typename`) before they are passed to your
resolver in the arguments object.
#### Limiting global ID args to specific types
`globalID` inputs can be configured to validate the type of the globalID. This is useful if you
only want to accept IDs for specific node types.
```typescript
builder.queryType({
  fields: (t) => ({
    fieldThatAcceptsGlobalID: t.boolean({
      args: {
        id: t.arg.globalID({
          for: SomeType,
          // or allow multiple types: for: [TypeOne, TypeTwo]
          required: true,
        }),
      },
    }),
  }),
});
```
### Creating Connections
The `t.connection` field builder method can be used to define connections. This method will
automatically create the `Connection` and `Edge` objects used by the connection, and add `before`,
`after`, `first`, and `last` arguments. The first time this method is used, it will also create the
`PageInfo` type.
```typescript
builder.queryFields((t) => ({
  numbers: t.connection(
    {
      type: NumberThing,
      resolve: (parent, { first, last, before, after }) => {
        return {
          pageInfo: {
            hasNextPage: false,
            hasPreviousPage: false,
            startCursor: 'abc',
            endCursor: 'def',
          },
          edges: [
            {
              cursor: 'abc',
              node: new NumberThing(123),
            },
            {
              cursor: 'def',
              node: new NumberThing(123),
            },
          ],
        };
      },
    },
    {
      name: 'NameOfConnectionType', // optional, will use ParentObject + capitalize(FieldName) + "Connection" as the default
      fields: (tc) => ({
        // define extra fields on Connection
        // We need to use a new variable for the connection field builder (eg tc) to get the correct types
      }),
      edgesField: {}, // optional, allows customizing the edges field on the Connection Object
      // Other options for connection object can be added here
    },
    {
      // Same as above, but for the Edge Object
      name: 'NameOfEdgeType', // optional, will use Connection name + "Edge" as the default
      fields: (te) => ({
        // define extra fields on Edge
        // We need to use a new variable for the edge field builder (eg te) to get the correct types
      }),
      nodeField: {}, // optional, allows customizing the node field on the Edge Object
    },
  ),
}));
```
Manually implementing connections can be cumbersome, so there are a couple of helper methods that
can make resolving connections a little easier.
For limit/offset based apis:
```typescript
import { resolveOffsetConnection } from '@pothos/plugin-relay';

builder.queryFields((t) => ({
  things: t.connection({
    type: SomeThing,
    resolve: (parent, args) => {
      return resolveOffsetConnection({ args }, ({ limit, offset }) => {
        return getThings(offset, limit);
      });
    },
  }),
}));
```
`resolveOffsetConnection` has a few default limits to prevent unintentionally allowing too many
records to be fetched at once. These limits can be configured using the following options:
```typescript
{
  args: ConnectionArguments;
  defaultSize?: number; // defaults to 20
  maxSize?: number; // defaults to 100
  totalCount?: number; // required to support using `last` without `before`
}
```
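To build intuition for how `first`/`after` become the `limit`/`offset` passed to your callback, here is a simplified sketch. It assumes a hypothetical cursor scheme where the cursor is just the stringified item index; `resolveOffsetConnection`'s real cursors are opaque strings, so treat this as an illustration of the math only:

```typescript
// Sketch of offset pagination math (hypothetical cursors: stringified indexes).
function toLimitOffset(
  args: { first?: number | null; after?: string | null },
  defaultSize = 20,
  maxSize = 100,
): { limit: number; offset: number } {
  // start just past the `after` cursor, or at the beginning
  const offset = args.after ? Number.parseInt(args.after, 10) + 1 : 0;
  // clamp the page size between the default and the maximum
  const limit = Math.min(args.first ?? defaultSize, maxSize);
  return { limit, offset };
}
```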
For APIs where you have the full array available you can use `resolveArrayConnection`, which works
just like `resolveOffsetConnection` and accepts the same options.
```typescript
import { resolveArrayConnection } from '@pothos/plugin-relay';

builder.queryFields((t) => ({
  things: t.connection({
    type: SomeThings,
    resolve: (parent, args) => {
      return resolveArrayConnection({ args }, getAllTheThingsAsArray());
    },
  }),
}));
```
Cursor based pagination can be implemented using the `resolveCursorConnection` method. The following
example uses prisma, but a similar solution should work with any data store that supports limits,
ordering, and filtering.
```typescript
import { resolveCursorConnection, ResolveCursorConnectionArgs } from '@pothos/plugin-relay';

builder.queryField('posts', (t) =>
  t.connection({
    type: Post,
    resolve: (_, args) =>
      resolveCursorConnection(
        {
          args,
          toCursor: (post) => post.createdAt.toISOString(),
        },
        // Manually defining the arg type here is required
        // so that typescript can correctly infer the return value
        ({ before, after, limit, inverted }: ResolveCursorConnectionArgs) =>
          prisma.post.findMany({
            take: limit,
            where: {
              createdAt: {
                lt: before,
                gt: after,
              },
            },
            orderBy: {
              createdAt: inverted ? 'desc' : 'asc',
            },
          }),
      ),
  }),
);
```
### Relay Mutations
You can use the `relayMutationField` method to define relay compliant mutation fields. This method
will generate a mutation field, an input object with a `clientMutationId` field, and an output
object with the corresponding `clientMutationId`.
Example usage:
```typescript
builder.relayMutationField(
  'deleteItem',
  {
    inputFields: (t) => ({
      id: t.id({
        required: true,
      }),
    }),
  },
  {
    nullable: false, // You can optionally change the nullability of the mutation field here
    resolve: async (root, args, ctx) => {
      if (ctx.items.has(args.input.id)) {
        ctx.items.delete(args.input.id);
        return { success: true };
      }
      return { success: false };
    },
  },
  {
    outputFields: (t) => ({
      success: t.boolean({
        resolve: (result) => result.success,
      }),
    }),
  },
);
```
Which produces the following graphql types:
```graphql
input DeleteItemInput {
  clientMutationId: ID!
  id: ID!
}

type DeleteItemPayload {
  clientMutationId: ID!
  success: Boolean
}

type Mutation {
  deleteItem(input: DeleteItemInput!): DeleteItemPayload!
}
```
The `relayMutationField` has 4 arguments:
* `name`: Name of the mutation field
* `inputOptions`: Options for the `input` object or a ref to an existing input object
* `fieldOptions`: Options for the mutation field
* `payloadOptions`: Options for the Payload object
The `inputOptions` has a couple of non-standard options:
* `name` which can be used to set the name of the input object
* `argName` which can be used to overwrite the default arguments name (`input`).
The `payloadOptions` object also accepts a `name` property for setting the name of the payload
object.
You can also access refs for the created input and payload objects so you can re-use them in other
fields:
```typescript
// Using aliases when destructuring lets you name your refs rather than using the generic `inputType` and `payloadType`
const { inputType: DeleteItemInput, payloadType: DeleteItemPayload } = builder.relayMutationField(
  'deleteItem',
  ...
);
```
### Reusing connection objects
In some cases you may want to create a connection object type that is shared by multiple fields. To
do this, you will need to create the connection object separately and then create a fields using a
ref to your connection object:
```typescript
import { resolveOffsetConnection } from '@pothos/plugin-relay';

const ThingsConnection = builder.connectionObject(
  {
    // connection options
    type: SomeThing,
    name: 'ThingsConnection',
  },
  {
    // Edge options (optional)
    name: 'ThingsEdge', // defaults to appending `Edge` to the Connection name
  },
);

// You can use connection object with normal fields
builder.queryFields((t) => ({
  things: t.field({
    type: ThingsConnection,
    args: {
      ...t.arg.connectionArgs(),
    },
    resolve: (parent, args) => {
      return resolveOffsetConnection({ args }, ({ limit, offset }) => {
        return getThings(offset, limit);
      });
    },
  }),
}));
// Or by providing the connection type when creating a connection field
builder.queryFields((t) => ({
  things: t.connection(
    {
      resolve: (parent, args) => {
        return resolveOffsetConnection({ args }, ({ limit, offset }) => {
          return getThings(offset, limit);
        });
      },
    },
    ThingsConnection,
  ),
}));
```
`builder.connectionObject` creates the connection object type and the associated Edge type.
`t.arg.connectionArgs()` will create the default connection args.
### Reusing edge objects
Similarly you can directly create and re-use edge objects
```typescript
import { resolveOffsetConnection } from '@pothos/plugin-relay';

const ThingsEdge = builder.edgeObject({
  name: 'ThingsEdge',
  type: SomeThing,
});

// The edge object can be used when creating a connection object
const ThingsConnection = builder.connectionObject(
  {
    type: SomeThing,
    name: 'ThingsConnection',
  },
  ThingsEdge,
);
// Or when creating a connection field
// Or when creating a connection field
builder.queryFields((t) => ({
  things: t.connection(
    {
      type: SomeThing,
      resolve: (parent, args) => {
        return resolveOffsetConnection({ args }, ({ limit, offset }) => {
          return getThings(offset, limit);
        });
      },
    },
    {
      // connection options
    },
    ThingsEdge,
  ),
}));
```
`builder.edgeObject` creates the Edge object type, which can then be shared by multiple connection
types.
### Expose nodes
The `t.node` and `t.nodeList` methods can be used to add additional node fields. The expected
return values of the `id` and `ids` functions are the same as the resolve value of `t.globalID`, and
can either be a globalID or an object with an `id` and a `type`.
Loading nodes by `id` uses a request cache, so the same node will only be loaded once per request,
even if it is used multiple times across the schema.
```typescript
builder.queryFields((t) => ({
  extraNode: t.node({
    id: () => 'TnVtYmVyOjI=',
  }),
  moreNodes: t.nodeList({
    ids: () => ['TnVtYmVyOjI=', { id: 10, type: 'SomeType' }],
  }),
}));
```
### Decoding and encoding global IDs
The relay plugin exports `decodeGlobalID` and `encodeGlobalID` as helper methods for interacting
with global IDs directly. If you accept a global ID as an argument you can use the `decodeGlobalID`
function to decode it:
```typescript
builder.mutationFields((t) => ({
  updateThing: t.field({
    type: Thing,
    args: {
      id: t.arg.id({ required: true }),
      update: t.arg.string({ required: true }),
    },
    resolve(parent, args) {
      const { typename, id } = decodeGlobalID(args.id);
      const thing = Thing.findById(id);
      thing.update(args.update);
      return thing;
    },
  }),
}));
```
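The example IDs used in these docs (like `TnVtYmVyOjI=`) follow the common Relay convention of base64-encoding `Typename:id`. Below is a minimal sketch of that scheme; this is an assumption about the default implementation shown only for illustration, so prefer the exported `encodeGlobalID`/`decodeGlobalID` helpers in real code:

```typescript
// Sketch of the conventional Relay global ID scheme: base64("Typename:id").
// Assumption about the default implementation; use the exported helpers instead.
function encodeGlobalIDSketch(typename: string, id: string | number): string {
  return Buffer.from(`${typename}:${id}`).toString('base64');
}

function decodeGlobalIDSketch(globalID: string): { typename: string; id: string } {
  const [typename, id] = Buffer.from(globalID, 'base64').toString('utf8').split(':');
  return { typename, id };
}
```

For example, `encodeGlobalIDSketch('Number', 2)` produces `TnVtYmVyOjI=`, the ID used in the node examples above.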
### Using custom encoding for global IDs
In some cases you may want to encode global IDs differently than the built-in ID encoding. To do
this, you can pass a custom encoding and decoding function into the relay options of the builder:
```typescript
import RelayPlugin from '@pothos/plugin-relay';

const builder = new SchemaBuilder({
  plugins: [RelayPlugin],
  relay: {
    encodeGlobalID: (typename: string, id: string | number | bigint) => `${typename}:${id}`,
    decodeGlobalID: (globalID: string) => {
      const [typename, id] = globalID.split(':');
      return { typename, id };
    },
  },
});
```
### Using custom resolve for node and/or nodes fields
If you need to customize how nodes are loaded for the `node` and/or `nodes` fields you can provide
custom resolve functions in the builder options for these fields:
```typescript
import RelayPlugin from '@pothos/plugin-relay';

function customUserLoader({ id, typename }: { id: string; typename: string }) {
  // load user
}

const builder = new SchemaBuilder({
  plugins: [RelayPlugin],
  relay: {
    nodeQueryOptions: {
      resolve: (root, { id }, context, info, resolveNode) => {
        // use custom loading for User nodes
        if (id.typename === 'User') {
          return customUserLoader(id);
        }
        // fallback to normal loading for everything else
        return resolveNode(id);
      },
    },
    nodesQueryOptions: {
      resolve: (root, { ids }, context, info, resolveNodes) => {
        return ids.map((id) => {
          if (id.typename === 'User') {
            return customUserLoader(id);
          }
          // it would be more efficient to load all the nodes at once
          // but it is important to ensure the resolver returns nodes in the right order
          // we are resolving nodes one at a time here for simplicity
          return resolveNodes([id]);
        });
      },
    },
  },
});
```
### Extending all connections
There are 2 builder methods for adding fields to all connection objects: `t.globalConnectionField`
and `t.globalConnectionFields`. These methods work like many of the other methods on the builder for
adding fields to objects or interfaces.
```typescript
builder.globalConnectionField('totalCount', (t) =>
  t.int({
    nullable: false,
    resolve: (parent) => 123,
  }),
);

// Or
builder.globalConnectionFields((t) => ({
  totalCount: t.int({
    nullable: false,
    resolve: (parent) => 123,
  }),
}));
```
In the above example, we are just returning a static number for our `totalCount` field. To make this
more useful, we need to have our resolvers for each connection actually return an object that
contains a totalCount for us. To guarantee that resolvers correctly implement this behavior, we can
define custom properties that must be returned from connection resolvers when we set up our builder:
```typescript
import RelayPlugin from '@pothos/plugin-relay';

const builder = new SchemaBuilder<{
  Connection: {
    totalCount: number;
  };
}>({
  plugins: [RelayPlugin],
  relay: {},
});
```
Now typescript will ensure that objects returned from each connection resolver include a totalCount
property, which we can use in our connection fields:
```typescript
builder.globalConnectionField('totalCount', (t) =>
  t.int({
    nullable: false,
    resolve: (parent) => parent.totalCount,
  }),
);
```
Note that adding additional required properties will make it harder to use the provided connection
helpers since they will not automatically return your custom properties. You will need to manually
add in any custom props after getting the result from the helpers:
```typescript
builder.queryFields((t) => ({
  posts: t.connection({
    type: Post,
    resolve: (parent, args, context) => {
      const postsArray = context.Posts.getAll();
      const result = resolveArrayConnection({ args }, postsArray);
      return result && { totalCount: postsArray.length, ...result };
    },
  }),
}));
```
### Changing nullability of edges and nodes
If you want to change the nullability of the `edges` field on a `Connection` or the `node` field on
an `Edge` you can configure this in 2 ways:
#### Globally
```typescript
import RelayPlugin from '@pothos/plugin-relay';

const builder = new SchemaBuilder<{
  DefaultEdgesNullability: false;
  DefaultNodeNullability: true;
}>({
  plugins: [RelayPlugin],
  relay: {
    edgesFieldOptions: {
      nullable: false,
    },
    nodeFieldOptions: {
      nullable: true,
    },
  },
});
```
The types provided for `DefaultEdgesNullability` and `DefaultNodeNullability` must match the values
provided in the nullable option of `edgesFieldOptions` and `nodeFieldOptions` respectively. This
will set the default nullability for all connections created by your builder.
Nullability for `edges` fields defaults to `{ list: options.defaultFieldNullability, items: true }`
and the nullability of `node` fields is the same as `options.defaultFieldNullability` (which
defaults to `true`).
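With the global options above, a generated connection would look roughly like the following SDL (type names are placeholders, and the exact output may differ):

```graphql
# Illustrative sketch only
type ThingsConnection {
  pageInfo: PageInfo!
  edges: [ThingsEdge]! # non-nullable list (DefaultEdgesNullability: false), items still nullable
}

type ThingsEdge {
  cursor: String!
  node: SomeThing # nullable node (DefaultNodeNullability: true)
}
```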
#### Per connection
```typescript
builder.queryFields((t) => ({
  things: t.connection({
    type: SomeThings,
    edgesNullable: {
      items: true,
      list: false,
    },
    nodeNullable: false,
    resolve: (parent, args) => {
      return resolveOffsetConnection({ args }, ({ limit, offset }) => {
        return getThings(offset, limit);
      });
    },
  }),
}));

// Or
const ThingsConnection = builder.connectionObject({
  type: SomeThing,
  name: 'ThingsConnection',
  edgesNullable: {
    items: true,
    list: false,
  },
  nodeNullable: false,
});
```
### Extending the `Node` interface
Use the `nodeInterfaceRef` method of your Builder.
For example, to add a new derived field on the interface:
```ts
builder.interfaceField(builder.nodeInterfaceRef(), 'extra', (t) =>
  t.string({
    resolve: () => 'it works',
  }),
);
```
# Auth plugin
URL: /docs/plugins/scope-auth
Auth plugin docs for Pothos
***
title: Auth plugin
description: Auth plugin docs for Pothos
----------------------------------------
The scope auth plugin aims to be a general purpose authorization plugin that can handle a wide
variety of authorization use cases, while incurring a minimal performance overhead.
## Usage
### Install
```package-install
npm install --save @pothos/plugin-scope-auth
```
#### IMPORTANT
When using `scope-auth` with other plugins, the `scope-auth` plugin should generally be listed first
to ensure that other plugins that wrap resolvers do not execute before the `scope-auth` logic.
However, exceptions do exist where it is desirable for a plugin to run before `scope-auth`. For
instance, putting the [relay plugin](https://pothos-graphql.dev/docs/plugins/relay) before the
`scope-auth` plugin results in the `authScopes` function correctly receiving parsed `globalID`s.
### Setup
```typescript
import SchemaBuilder from '@pothos/core';
import ScopeAuthPlugin from '@pothos/plugin-scope-auth';

type MyPerms = 'readStuff' | 'updateStuff' | 'readArticle';

const builder = new SchemaBuilder<{
  // Types used for scope parameters
  AuthScopes: {
    public: boolean;
    employee: boolean;
    deferredScope: boolean;
    customPerm: MyPerms;
  };
}>({
  plugins: [ScopeAuthPlugin],
  scopeAuth: {
    // Recommended when using subscriptions
    // when this is not set, auth checks are run when event is resolved rather than when the subscription is created
    authorizeOnSubscribe: true,
    // scope initializer, create the scopes and scope loaders for each request
    authScopes: async (context) => ({
      public: !!context.User,
      // eagerly evaluated scope
      employee: await context.User.isEmployee(),
      // evaluated when used
      deferredScope: () => context.User.isEmployee(),
      // scope loader with argument
      customPerm: (perm) => context.permissionService.hasPermission(context.User, perm),
    }),
  },
});
```
In the above setup, we import the `scope-auth` plugin and include it in the builder's plugin list.
We also define 2 important things:
1. The `AuthScopes` type in the builder `SchemaTypes`. This is a map of types that defines the types
used by each of your scopes. We'll see how this is used in more detail below.
2. The `scope initializer` function, which is the implementation of each of the scopes defined in
the type above. This function returns a map of either booleans (indicating if the request has
the scope) or functions that load the scope (with an optional parameter).
The names of the scopes (`public`, `employee`, `deferredScope`, and `customPerm`) are all
arbitrary, and are not part of the plugin. You can use whatever scope names you prefer, and can add
as many as you need.
### Using a scope on a field
```typescript
builder.queryType({
  fields: (t) => ({
    message: t.string({
      authScopes: {
        public: true,
      },
      resolve: () => 'hi',
    }),
  }),
});
```
## Terminology
A lot of terms around authorization are overloaded, and can mean different things to different
people. Here is a short list of a few terms used in this document, and how they should be
interpreted:
* `scope`: A scope is the unit of authorization that can be used to authorize a request to resolve a
field.
* `scope map`: A map of scope names and scope parameters. This defines the set of scopes that will
be checked for a field or type to authorize the request to resolve a resource.
* `scope loader`: A function for dynamically loading a scope given a scope parameter. Scope loaders
are ideal for integrating with a permission service, or creating scopes that can be customized
based on the field or values that they are authorizing.
* `scope parameter`: A parameter that will be passed to a scope loader. These are the values in the
authScopes objects.
* `scope initializer`: The function that creates the scopes or scope loaders for the current
request.
While this plugin uses `scopes` as the term for its authorization mechanism, this plugin can easily
be used for role or permission based schemes, and is not intended to dictate a specific philosophy
around how to authorize requests/access to resources.
## Use cases
Examples below assume the following builder setup:
```typescript
const builder = new SchemaBuilder<{
  // Types used for scope parameters
  AuthScopes: {
    public: boolean;
    employee: boolean;
    deferredScope: boolean;
    customPerm: MyPerms;
  };
}>({
  plugins: [ScopeAuthPlugin],
  scopeAuth: {
    authScopes: async (context) => ({
      public: !!context.User,
      employee: await context.User.isEmployee(),
      deferredScope: () => context.User.isEmployee(),
      customPerm: (perm) => context.permissionService.hasPermission(context.User, perm),
    }),
  },
});
```
### Top level auth on queries and mutations
To add an auth check to root level queries or mutations, add authScopes to the field options:
```typescript
builder.queryType({
  fields: (t) => ({
    internalMessage: t.string({
      authScopes: {
        employee: true,
      },
      resolve: () => 'hi',
    }),
  }),
});
```
This will require the requests to have the `employee` scope. Adding multiple scopes to the
`authScopes` object will check all the scopes, and if the user has any of the scopes, the request
will be considered authorized for the current field. Subscription and Mutation root fields work the
same way.
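This "any scope passes" behavior can be sketched as a simple check over the resolved scopes (a simplification for illustration: the real plugin also supports `$any`, `$all`, `$granted`, and async scope loaders):

```typescript
// Simplified sketch of the "any scope passes" rule for a field's authScopes map.
type ResolvedScopes = Record<string, boolean>;

function fieldIsAuthorized(
  resolved: ResolvedScopes,
  authScopes: Record<string, true>,
): boolean {
  // the request is authorized if it has at least one of the requested scopes
  return Object.keys(authScopes).some((name) => resolved[name] === true);
}
```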
### Auth on nested fields
Fields on nested objects can be authorized the same way scopes are authorized on the root types.
```typescript
builder.objectType(Article, {
  fields: (t) => ({
    title: t.exposeString('title', {
      authScopes: {
        employee: true,
      },
    }),
  }),
});
```
### Default auth for all fields on types
To apply the same scope requirements to all fields on a type, you can define an `authScope` map in
the type options rather than on the individual fields.
```typescript
builder.objectType(Article, {
  authScopes: {
    public: true,
  },
  fields: (t) => ({
    title: t.exposeString('title', {}),
    content: t.exposeString('content', {}),
  }),
});
```
### Overwriting default auth on field
In some cases you may want to use default auth scopes for a type, but need to change the behavior
for one specific field.
To add additional requirements for a specific field you can simply add additional scopes on the
field itself.
```typescript
builder.objectType(Article, {
  authScopes: {
    public: true,
  },
  fields: (t) => ({
    title: t.exposeString('title', {}),
    viewCount: t.exposeInt('viewCount', {
      authScopes: {
        employee: true,
      },
    }),
  }),
});
```
To remove the type level scopes for a field, you can use the `skipTypeScopes` option:
```typescript
builder.objectType(Article, {
  authScopes: {
    public: true,
  },
  fields: (t) => ({
    title: t.exposeString('title', {
      skipTypeScopes: true,
    }),
    content: t.exposeString('content', {}),
  }),
});
```
This will allow non-logged in users to resolve the title, but not the content of an Article.
`skipTypeScopes` can be used in conjunction with `authScopes` on a field to completely overwrite the
default scopes.
### Running scopes on types rather than fields
By default, all auth scopes are tested before a field resolves. This includes both scopes defined on
a type and scopes defined on a field. When scopes for a `type` fail, you will end up with an error
for each field of that type. Type level scopes are only executed once, but the errors are emitted
for each affected field.
This behavior may not be desirable for all users. You can set `runScopesOnType` to true, either on
object types, or in the `scopeAuth` options of the builder:
```typescript
const builder = new SchemaBuilder<{
  Context: Context;
  AuthScopes: {
    loggedIn: boolean;
  };
}>({
  scopeAuth: {
    // Affects all object types (Excluding Query, Mutation, and Subscription)
    runScopesOnType: true,
    authScopes: async (context) => ({
      loggedIn: !!context.User,
    }),
  },
  plugins: [ScopeAuthPlugin],
});

builder.objectType(Article, {
  runScopesOnType: true,
  authScopes: {
    readArticle: true,
  },
  fields: (t) => ({
    title: t.exposeString('title', {
      // this will not have any effect because type scopes are not evaluated at the field level
      skipTypeScopes: true,
    }),
    content: t.exposeString('content', {}),
  }),
});
```
Enabling this has a couple of limitations:
1. THIS DOES NOT CURRENTLY WORK WITH `graphql-jit`. This option uses the `isTypeOf` function, but
`graphql-jit` does not support async `isTypeOf`, and also does not correctly pass the context
object to the isTypeOf checks. Until this is resolved, this option will not work with
`graphql-jit`.
2. Fields of types that set `runScopesOnType` to true will not be able to use `skipTypeScopes` or
`skipInterfaceScopes`.
### Generalized auth functions with field specific arguments
The scopes we have covered so far have all been related to information that applies to a full
request. In more complex applications it may not make sense to enumerate all the scopes a request
is authorized for ahead of time. To handle these cases you can define a scope loader which takes a
parameter and dynamically determines if a request is authorized for a scope using that parameter.
One common example of this would be a permission service that can check if a user or request has a
certain permission, and you want to specify the specific permission each field requires.
```typescript
builder.queryType({
  fields: (t) => ({
    articles: t.field({
      type: [Article],
      authScopes: {
        customPerm: 'readArticle',
      },
      resolve: () => Article.getSome(),
    }),
  }),
});
```
In the example above, the `authScopes` map uses the `customPerm` scope loader with a parameter of
`readArticle`. The first time a field requests this scope, the customPerm loader will be called with
`readArticle` as its argument. This scope will be cached, so that if multiple fields request the
same scope, the scope loader will still only be called once.
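The caching behavior described above can be sketched roughly as a per-request cache keyed by the scope parameter (an assumption about the plugin internals, shown here only to illustrate why the loader runs once per distinct parameter):

```typescript
// Sketch of per-request scope caching: results are cached by (serialized)
// parameter for the life of the request, so a loader runs once per parameter.
function cacheScopeLoader<P, R>(
  loader: (param: P) => R,
): { load: (param: P) => R; calls: () => number } {
  const cache = new Map<string, R>();
  let count = 0;
  return {
    load(param: P): R {
      const key = JSON.stringify(param);
      if (!cache.has(key)) {
        count += 1;
        cache.set(key, loader(param));
      }
      return cache.get(key)!;
    },
    calls: () => count,
  };
}
```

A request-scoped cache like this means that using `customPerm: 'readArticle'` on many fields still only calls the underlying permission service once per request.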
The types for the parameters you provide for each scope are based on the types provided to the
builder in the `AuthScopes` type.
### Customizing error messages
Error messages (and error instances) can be customized either globally or on specific fields.
#### Globally
```typescript
const builder = new SchemaBuilder<{
  Context: Context;
  AuthScopes: {
    loggedIn: boolean;
  };
}>({
  scopeAuth: {
    treatErrorsAsUnauthorized: true,
    unauthorizedError: (parent, context, info, result) => new Error(`Not authorized`),
    authScopes: async (context) => ({
      loggedIn: !!context.User,
    }),
  },
  plugins: [ScopeAuthPlugin],
});
```
The `unauthorizedError` callback will be called with the parent, context, and info object of the
unauthorized field. It will also include a 4th argument `result` that has the default message for
this type of failure, and a `failure` property with some details about what caused the field to be
unauthorized. This callback can either return an `Error` instance (or an instance of a class that
extends `Error`), or a `string`. If a string is returned, it will be converted to a
`ForbiddenError`.
The `treatErrorsAsUnauthorized` option changes how errors in authorization functions are handled. By
default errors are not caught by the plugin, and will act as if thrown directly from the resolver.
This means that thrown errors bypass the `unauthorizedError` callback, and will cause requests to
fail even when another scope in an `$any` passes.
Setting `treatErrorsAsUnauthorized` will cause errors to be caught and treated as if the scope was
not authorized.
#### Surfacing errors thrown in authorization checks
When `treatErrorsAsUnauthorized` is set to true, errors are caught and attached to the `result`
object in the `unauthorizedError` callback. This allows you to surface the error to the client.
For example, if you want to re-throw errors thrown by authorization functions you could do this by
writing a custom `unauthorizedError` callback like this:
```typescript
import SchemaBuilder from '@pothos/core';
import ScopeAuthPlugin, { AuthFailure, AuthScopeFailureType } from '@pothos/plugin-scope-auth';
// Find the first error and re-throw it
function throwFirstError(failure: AuthFailure) {
// Check if the failure has an error attached to it and re-throw it
if ('error' in failure && failure.error) {
throw failure.error;
}
// Loop over any/all scopes and see if one of their children has an error to throw
if (
failure.kind === AuthScopeFailureType.AnyAuthScopes ||
failure.kind === AuthScopeFailureType.AllAuthScopes
) {
for (const child of failure.failures) {
throwFirstError(child);
}
}
}
const builder = new SchemaBuilder<{
Context: Context;
AuthScopes: {
loggedIn: boolean;
};
}>({
scopeAuth: {
treatErrorsAsUnauthorized: true,
unauthorizedError: (parent, context, info, result) => {
// throw an error if it's found
throwFirstError(result.failure);
// throw a fallback error if no error was found
return new Error(`Not authorized`);
},
},
plugins: [ScopeAuthPlugin],
authScopes: async (context) => ({
loggedIn: !!context.User,
}),
});
```
#### On individual fields
```typescript
builder.queryType({
fields: (t) => ({
example: t.string({
authScopes: { loggedIn: true },
unauthorizedError: (parent, args, context, info, result) =>
new Error("You must be logged in to query the 'example' field"),
resolve: () => 'example',
}),
}),
});
```
### Returning a custom value when unauthorized
In some cases you may want to return null, an empty array, throw a custom error, or return a custom
result when a user is not authorized. To do this you can add an `unauthorizedResolver` option to your
field.
```typescript
builder.queryType({
fields: (t) => ({
articles: t.field({
type: [Article],
authScopes: {
customPerm: 'readArticle',
},
resolve: () => Article.getSome(),
unauthorizedResolver: () => [],
}),
}),
});
```
In the example above, if a user is not authorized they will simply receive an empty array in the
response. The `unauthorizedResolver` option takes the same arguments as a resolver, but also
receives a 5th argument that is an instance of `ForbiddenError`.
### Setting scopes that apply for a full request
We have already seen several examples of this. For scopes that apply to a full request like `public`
or `employee`, rather than using a scope loader, the scope initializer can simply use a boolean to
indicate if the request has the given scope. If you know ahead of time that a scope loader will
always return false for a specific request, you can do something like the following to avoid the
additional overhead of running the loader:
```typescript
const builder = new SchemaBuilder<{
AuthScopes: {
humanPermission: string;
};
}>({
plugins: [ScopeAuthPlugin],
authScopes: async (context) => ({
humanPermission: context.user.isHuman() ? (perm) => context.user.hasPermission(perm) : false,
}),
});
```
This will ensure that if a request accesses a field that requests a `humanPermission` scope, and the
request is made by another service or bot, we don't have to run the `hasPermission` check at all for
those requests, since we know it would return false anyways.
### Change context types based on scopes
Sometimes you need to change your context typings depending on the applied scopes. You can provide
custom context for your defined scopes and use the `authField` method to access the custom context:
```typescript
type Context = {
user: User | null;
};
const builder = new SchemaBuilder<{
Context: Context;
AuthScopes: {
loggedIn: boolean;
};
AuthContexts: {
loggedIn: Context & { user: User };
};
}>({
plugins: [ScopeAuthPlugin],
authScopes: async (context) => ({
loggedIn: !!context.user,
}),
});
builder.queryField('currentId', (t) =>
t.authField({
type: 'ID',
authScopes: {
loggedIn: true,
},
resolve: (parent, args, context) => context.user.id,
}),
);
```
Some plugins contribute field builder methods with additional functionality that may not work with
`t.authField`. In order to work with those methods, there is also a `t.withAuth` method that can be
used to return a field builder with authScopes predefined.
```typescript
type Context = {
user: User | null;
};
const builder = new SchemaBuilder<{
Context: Context;
AuthScopes: {
loggedIn: boolean;
};
AuthContexts: {
loggedIn: Context & { user: User };
};
}>({
plugins: [ScopeAuthPlugin],
authScopes: async (context) => ({
loggedIn: !!context.user,
}),
});
builder.queryField('viewer', (t) =>
t
.withAuth({
loggedIn: true,
})
.prismaField({
type: User,
resolve: (query, root, args, ctx) =>
prisma.findUniqueOrThrow({
...query,
where: { id: ctx.user.id },
}),
}),
);
```
### Logical operations on auth scopes (any/all)
By default the scopes in a scope map are evaluated in parallel, and if the request has any of
the requested scopes, the field will be resolved. In some cases, you may want to require multiple
scopes:
```typescript
builder.objectType(Article, {
fields: (t) => ({
title: t.exposeString('title', {}),
viewCount: t.exposeInt('viewCount', {
authScopes: {
$all: {
$any: {
employee: true,
deferredScope: true,
},
public: true,
},
},
}),
}),
});
```
You can use the built in `$any` and `$all` scope loaders to combine requirements for scopes. The
above example requires a request to have either the `employee` or `deferredScope` scopes, and the
`public` scope. `$any` and `$all` each take a scope map as their parameters, and can be nested
inside each other.
You can change the default strategy used for top level auth scopes by setting the `defaultStrategy`
option in the builder (defaults to `any`):
```typescript
const builder = new SchemaBuilder<{
Context: {
user: User | null;
};
AuthScopes: {
loggedIn: boolean;
};
DefaultAuthStrategy: 'all';
}>({
plugins: [ScopeAuthPlugin],
scopeAuthOptions: {
defaultStrategy: 'all',
},
authScopes: async (context) => ({
loggedIn: !!context.user,
}),
});
```
### Auth that depends on parent value
For cases where the required scopes depend on the value of the requested resource you can use a
function in the `authScopes` option that returns the scope map for the field.
```typescript
builder.objectType(Article, {
fields: (t) => ({
viewCount: t.exposeInt('viewCount', {
authScopes: (article, args, context, info) => {
if (context.User.id === article.author.id) {
// If user is author, let them see it
// returning a boolean lets you set auth without specifying other scopes to check
return true;
}
// If the user is not the author, require the employee scope
return {
employee: true,
};
},
}),
}),
});
```
`authScopes` functions on fields receive the same arguments as the field resolver, and will be
called each time the resolver for the field would be called. This means the same function
could be called multiple times for the same resource if the field is requested multiple times using
aliases.
Returning a boolean from an auth scope function is an easy way to allow or disallow a request from
resolving a field without needing to evaluate additional scopes.
### Setting type level scopes based on the parent value
You can also use a function in the `authScopes` option for types. This function will be invoked with
the parent and the context as its arguments, and should return a scope map.
```typescript
builder.objectType(Article, {
authScopes: (parent, context) => {
if (parent.isPublished()) {
return {
public: true,
};
}
return {
employee: true,
};
},
fields: (t) => ({
title: t.exposeString('title', {}),
}),
});
```
The above example uses an `authScopes` function to prevent the fields of an article from being loaded
by non-employees unless the article has been published.
### Setting scopes based on the return value of a field
This is a use case that is not currently supported. The current workaround is to move those checks down
to the returned type. Combining this with `runScopesOnType` should work for most cases.
### Granting access to a resource based on how it is accessed
In some cases, you may want to grant a request scopes to access certain fields on a child type. To
do this you can use `$granted` scopes.
```typescript
builder.queryType({
fields: (t) => ({
freeArticle: t.field({
grantScopes: ['readArticle'],
// or
grantScopes: (parent, args, context, info) => ['readArticle'],
}),
}),
});
builder.objectType(Article, {
authScopes: {
public: true,
$granted: 'readArticle',
},
fields: (t) => ({
title: t.exposeString('title', {}),
}),
});
```
In the above example, the fields of the `Article` type normally require the `public` scope granted
to logged in users, but can also be accessed with the `$granted` scope `readArticle`. This means
that if the field that returned the Article "granted" the scope, the article can be read. The
`freeArticle` field on the `Query` type grants this scope, allowing anyone querying that field to
access fields of the free article. `$granted` scopes are separate from other scopes, and do not give
a request access to normal scopes of the same name. `$granted` scopes are also not inherited by
nested children, and would need to be explicitly passed down for each field if you wanted to grant
access to nested children.
### Reusing checks for multiple, but not all fields
You may have cases where groups of fields on a type are accessible using some shared condition. This
is another case where `$granted` scopes can be helpful.
```typescript
builder.objectType(Article, {
grantScopes: (article, context) => {
if (context.User.id === article.author.id) {
return ['author', 'readArticle'];
}
if (article.isDraft()) {
return [];
}
return ['readArticle'];
},
fields: (t) => ({
title: t.exposeString('title', {
authScopes: {
$granted: 'readArticle',
},
}),
content: t.exposeString('content', {
authScopes: {
$granted: 'readArticle',
},
}),
viewCount: t.exposeInt('viewCount', {
authScopes: {
$granted: 'author',
},
}),
}),
});
```
In the above example, `title`, `content`, and `viewCount` each use `$granted` scopes. In this case,
rather than scopes being granted by the parent field, they are granted by the Article type
itself. This allows the access to each field to change based on some dynamic conditions (if the
request is from the author, and if the article is a draft) without having to duplicate that logic
in each individual field.
### Interfaces
Interfaces can define auth scopes on their fields the same way objects do. Fields for a type will
run checks for each interface it implements separately, meaning that a request needs to satisfy
the scope requirements of every implemented interface before the field is resolved.
Object types can set `skipInterfaceScopes` to `true` to skip interface checks when resolving fields
for that Object type.
### Cache keys
Auth scopes by default are cached based on the identity of the scope parameter. This works great for
statically defined scopes, and scopes that take primitive values as their parameters. If you define
auth scopes that take complex objects, and create those objects in a scope function (based on
arguments, or parent values), you won't get cache hits on those checks.
To work around this, you can provide a `cacheKey` option to the builder for generating a cache key
from your scope checks.
```typescript
const builder = new SchemaBuilder<{
Context: Context;
AuthScopes: {
loggedIn: boolean;
};
}>({
scopeAuth: {
cacheKey: (val) => JSON.stringify(val),
authScopes: async (context) => ({
loggedIn: !!context.User,
}),
},
plugins: [ScopeAuthPlugin],
});
```
Above we are using `JSON.stringify` to generate a key. This will work for most complex objects, but
you may want to consider something like `faster-stable-stringify`, which can handle circular
references and will always produce the same output regardless of the order of properties.
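To illustrate why a stable serializer can matter, here is a standalone sketch (not part of the plugin) showing the property-order problem and a minimal key-sorting stringifier:

```typescript
// JSON.stringify preserves property insertion order, so two logically
// equal objects can produce different cache keys.
const a = JSON.stringify({ resource: 'article', action: 'read' });
const b = JSON.stringify({ action: 'read', resource: 'article' });
// a !== b, so these would miss the cache despite being equivalent.

// A minimal stable stringify that sorts keys first (no circular-reference
// handling; libraries like faster-stable-stringify cover that).
function stableStringify(value: unknown): string {
  if (value === null || typeof value !== 'object') {
    return JSON.stringify(value);
  }
  if (Array.isArray(value)) {
    return `[${value.map(stableStringify).join(',')}]`;
  }
  const entries = Object.entries(value as Record<string, unknown>)
    .sort(([x], [y]) => x.localeCompare(y))
    .map(([k, v]) => `${JSON.stringify(k)}:${stableStringify(v)}`);
  return `{${entries.join(',')}}`;
}
```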
## When checks are run, and how things are cached
### Scope Initializer
The scope initializer is run once, the first time a field protected by auth scopes is resolved, and
its result is cached for the current request.
### authScopes functions on fields
When using a function for `authScopes` on a field, the function will be run each time the field is
resolved, since it has access to all the arguments passed to the resolver.
### authScopes functions on types
When using a function for `authScopes` on a type, the function will be run once for each
instance of that type in the response. It will be run lazily when the first field for that object is
resolved, and its result will be cached and reused by all fields for that instance of the type.
### scope loaders
Scope loaders will be run whenever a field requires the corresponding scope with a unique
parameter. The scope loader results are cached per request based on a combination of the name of the
scope, and its parameter.
### grantScope on field
`grantScopes` on a field will run after the field is resolved, and is not cached.
### grantScope on type
`grantScopes` on a type (object or interface) will run when the first field on the type is
resolved. Its result will be cached and reused for each field of the same instance of the type.
## API
### Types
* `AuthScopes`: `extends {}`. Each property is the name of a scope, and each value is the type of
that scope's parameter.
* `ScopeLoaderMap`: An object whose keys are scope names (from `AuthScopes`) and whose values are
either booleans (indicating whether or not the request has the scope) or functions that take a
parameter (type from `AuthScopes`) and return `MaybePromise<boolean>`.
* `ScopeMap`: A map of scope names to parameters. Based on `AuthScopes`; may also contain `$all`,
`$any` or `$granted`.
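The relationship between these types can be sketched roughly as follows. This is a simplified illustration of the shapes involved, not the plugin's exact definitions:

```typescript
type MaybePromise<T> = T | Promise<T>;

// Scope names mapped to their parameter types (illustrative example)
type ExampleAuthScopes = {
  loggedIn: boolean;
  permission: string;
};

// For each scope: either a boolean, or a loader taking that scope's parameter
type ScopeLoaderMap<Scopes> = {
  [K in keyof Scopes]: boolean | ((param: Scopes[K]) => MaybePromise<boolean>);
};

const loaders: ScopeLoaderMap<ExampleAuthScopes> = {
  loggedIn: true,
  permission: (perm) => perm === 'readArticle',
};
```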
### Builder
* `authScopes`: `(context: Types['Context']) => MaybePromise<ScopeLoaderMap<Types['AuthScopes']>>`
### Object and Interface options
* `authScopes`: `ScopeMap`, or a function that accepts `parent` and `context` and returns
`MaybePromise<ScopeMap | boolean>`
* `grantScopes`: a function that accepts `parent` and `context` and returns `MaybePromise<string[]>`
### Field Options
* `authScopes`: `ScopeMap`, or a function that accepts the same arguments as the resolver and
returns `MaybePromise<ScopeMap | boolean>`
* `grantScopes`: `string[]`, or a function that accepts the same arguments as the resolver and
returns `MaybePromise<string[]>`
* `skipTypeScopes`: `boolean`
* `skipInterfaceScopes`: `boolean`
### toSchema options
* `disableScopeAuth`: disable the scope auth plugin. Useful for testing.
# Simple objects plugin
URL: /docs/plugins/simple-objects
Simple objects plugin docs for Pothos
***
title: Simple objects plugin
description: Simple objects plugin docs for Pothos
--------------------------------------------------
The Simple Objects Plugin provides a way to define objects and interfaces without defining type
definitions for those objects, while still getting full type safety.
## Usage
### Install
```package-install
npm install --save @pothos/plugin-simple-objects
```
### Setup
```typescript
import SimpleObjectsPlugin from '@pothos/plugin-simple-objects';
const builder = new SchemaBuilder({
plugins: [SimpleObjectsPlugin],
});
```
### Example
```typescript
import SchemaBuilder from '@pothos/core';
import SimpleObjectsPlugin from '@pothos/plugin-simple-objects';
const builder = new SchemaBuilder({
plugins: [SimpleObjectsPlugin],
});
const ContactInfo = builder.simpleObject('ContactInfo', {
fields: (t) => ({
email: t.string({
nullable: false,
}),
phoneNumber: t.string({
nullable: true,
}),
}),
});
const Node = builder.simpleInterface('Node', {
fields: (t) => ({
id: t.id({
nullable: false,
}),
}),
});
const UserType = builder.simpleObject(
'User',
{
interfaces: [Node],
fields: (t) => ({
firstName: t.string(),
lastName: t.string(),
contactInfo: t.field({
type: ContactInfo,
nullable: false,
}),
}),
},
// You can add additional fields with resolvers with a third fields argument
(t) => ({
fullName: t.string({
resolve: (user) => `${user.firstName} ${user.lastName}`,
}),
}),
);
builder.queryType({
fields: (t) => ({
user: t.field({
type: UserType,
args: {
id: t.arg.id({ required: true }),
},
resolve: (parent, args, { User }) => {
return {
id: '1003',
firstName: 'Leia',
lastName: 'Organa',
contactInfo: {
email: 'leia@example.com',
phoneNumber: null,
},
};
},
}),
}),
});
```
## Extending simple objects
In some cases, you may want to add more complex fields with resolvers or args where the value isn't
just passed down from the parent.
In these cases, you can either add the field in the 3rd arg (fields) as shown above, or you can add
additional fields to the type using methods like `builder.objectType`:
```typescript
builder.objectType(UserType, (t) => ({
fullName: t.string({
resolve: (user) => `${user.firstName} ${user.lastName}`,
}),
}));
```
## Limitations
When using simpleObjects in combination with other plugins like authorization, those plugins may use
`unknown` as the parent type in some custom fields (eg. `parent` of a permission check function on
a field).
# Smart subscriptions plugin
URL: /docs/plugins/smart-subscriptions
Smart subscriptions plugin docs for Pothos
***
title: Smart subscriptions plugin
description: Smart subscriptions plugin docs for Pothos
-------------------------------------------------------
This plugin provides a way of turning queries into GraphQL subscriptions. Each field, Object, and
Interface in a schema can define subscriptions to be registered when that field or type is used in a
smart subscription.
The basic flow of a smart subscription is:
1. Run the query the smart subscription is based on and push the initial result of that query to the
subscription
2. As the query is resolved, register any subscriptions defined on fields or types that were used
in the query
3. When any of the subscriptions are triggered, re-execute the query and push the updated data to
the subscription.
There are additional options that allow only the sub-tree of the field/type that triggered a
fetch to be re-resolved.
This pattern makes it easy to define subscriptions without having to worry about what parts of your
schema are accessible via the subscribe query, since any type or field can register a subscription.
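The three-step flow above can be sketched with a minimal event-driven model. This is a standalone illustration of the idea, unrelated to the plugin's real internals:

```typescript
// Minimal sketch of the smart-subscription flow: execute a query,
// record the topics it registered, and re-execute when one fires.
type Execute = (register: (topic: string) => void) => unknown;

class SmartSubscription {
  private topics = new Set<string>();
  results: unknown[] = [];

  constructor(private execute: Execute) {
    this.run(); // Step 1: push the initial result
  }

  private run() {
    this.topics.clear();
    // Step 2: resolvers register topics as the query executes
    this.results.push(this.execute((topic) => this.topics.add(topic)));
  }

  // Step 3: a matching event re-executes the query and pushes fresh data
  trigger(topic: string) {
    if (this.topics.has(topic)) {
      this.run();
    }
  }
}

const polls = ['Poll 1'];
const sub = new SmartSubscription((register) => {
  register('poll-added');
  return [...polls];
});
```

Triggering `poll-added` after the underlying data changes re-runs the query and pushes an updated result, while unrelated topics are ignored.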
## Usage
### Install
```package-install
npm install --save @pothos/plugin-smart-subscriptions
```
### Setup
```typescript
import SchemaBuilder from '@pothos/core';
import SmartSubscriptionsPlugin from '@pothos/plugin-smart-subscriptions';
const builder = new SchemaBuilder({
plugins: [SmartSubscriptionsPlugin],
smartSubscriptions: {
debounceDelay: number | null;
subscribe: (
name: string,
context: Context,
cb: (err: unknown, data?: unknown) => void,
) => Promise<void> | void;
unsubscribe: (name: string, context: Context) => Promise<void> | void;
},
});
```
#### Helper for usage with async iterators
```typescript
const builder = new SchemaBuilder({
smartSubscriptions: {
...subscribeOptionsFromIterator((name, { pubsub }) => {
return pubsub.asyncIterableIterator(name);
}),
},
});
```
### Creating a smart subscription
```typescript
builder.queryFields((t) => ({
polls: t.field({
type: ['Poll'],
smartSubscription: true,
subscribe: (subscriptions, root, args, ctx, info) => {
subscriptions.register('poll-added')
subscriptions.register('poll-deleted')
},
resolve: (root, args, ctx, info) => {
return ctx.getThings();
},
}),
}));
```
Adding `smartSubscription: true` to a query field creates a field of the same name on the
`Subscription` type. The `subscribe` option is optional, and shows how a field can register a
subscription.
This would be queried as:
```graphql
subscription {
polls {
question
answers {
id
value
}
}
}
```
### Registering subscriptions for objects
```typescript
builder.objectType('Poll', {
subscribe: (subscriptions, poll, context) => {
subscriptions.register(`poll/${poll.id}`)
},
fields: (t) => ({
question: t.exposeString('question', {}),
answers: t.field({...}),
}),
});
```
This will create a new subscription for every `Poll` that is returned in the subscription. When the
query is updated to fetch a new set of results because a subscription event fired, the subscribe
call will be called again for each poll in the new result set.
#### More options
```typescript
builder.objectType('Poll', {
subscribe: (subscriptions, poll, context) => {
subscriptions.register(`poll/${poll.id}`, {
filter: (value) => true | false,
invalidateCache: (value) => context.PollCache.remove(poll.id),
refetch: () => context.Polls.fetchByID(poll.id),
});
},
fields: (t) => ({
...
}),
});
```
Passing a `filter` function will filter the events, and only cause a re-fetch when it returns true.
`invalidateCache` is called before refetching data, to allow any cache invalidation to happen so
that when the new data is loaded, results are not stale.
`refetch` enables directly refetching the current object. When refetch is provided and a
subscription event fires for the current object, or any of its children, other parts of the query
that are not dependents of this object will not be refetched.
### Registering subscriptions for fields
```typescript
builder.objectType('Poll', {
fields: (t) => ({
question: t.exposeString('question', {}),
answers: t.field({
type: ['Answer'],
subscribe: (subscriptions, poll) => subscriptions.register(`poll-answers/${poll.id}`),
resolve: (parent, args, context, info) => {
return parent.answers;
},
}),
}),
});
```
#### More options for fields
```typescript
builder.objectType('Poll', {
fields: (t) => ({
question: t.exposeString('question', {}),
answers: t.field({
type: ['Answer'],
canRefetch: true,
subscribe: (subscriptions, poll, args, context) =>
subscriptions.register(`poll-answers/${poll.id}`, {
filter: (value) => true | false,
invalidateCache: (value) => context.PollCache.remove(poll.id),
}),
resolve: (parent, args, context, info) => {
return parent.answers;
},
}),
}),
});
```
Similar to subscriptions on objects, fields can pass `filter` and `invalidateCache` functions when
registering a subscription. Rather than passing a `refetch` function, you can set `canRefetch` to
`true` in the field options. This will re-run the current resolve function to update it (and its
children) without having to re-run the rest of the query.
### Known limitations
* Currently value passed to `filter` and `invalidateCache` is typed as `unknown`. This should be
improved in the future.
* Does not work with list fields implemented with async-generators (used for `@stream` queries)
# SubGraph plugin
URL: /docs/plugins/sub-graph
SubGraph plugin docs for Pothos
***
title: SubGraph plugin
description: SubGraph plugin docs for Pothos
--------------------------------------------
A plugin for creating sub-selections of your graph. This allows you to use the same code/types for
multiple variants of your API.
One common use case for this is to share implementations between your public and internal APIs, by
only exposing a subset of your graph publicly.
## Usage
### Install
```package-install
npm install --save @pothos/plugin-sub-graph
```
### Setup
```typescript
import SubGraphPlugin from '@pothos/plugin-sub-graph';
const builder = new SchemaBuilder<{
SubGraphs: 'Public' | 'Internal';
}>({
plugins: [SubGraphPlugin],
subGraphs: {
defaultForTypes: [],
fieldsInheritFromTypes: true,
},
});
//in another file:
const schema = builder.toSchema();
const publicSchema = builder.toSchema({ subGraph: 'Public' });
const internalSchema = builder.toSchema({ subGraph: 'Internal' });
// You can also build a graph containing multiple subgraphs:
const combinedSchema = builder.toSchema({ subGraph: ['Internal', 'Public'] });
// Or create a graph of the intersection between multiple subgraphs:
const allSchema = builder.toSchema({ subGraph: { all: ['Internal', 'Public'] } });
```
### Options on Types
* `subGraphs`: An optional array of sub-graphs the type should be included in.
### Object and Interface types:
* `defaultSubGraphsForFields`: Default sub-graphs for fields of the type to be included in.
## Options on Fields
* `subGraphs`: An optional array of sub-graphs the field should be included in. If not provided,
will fall back to:
* `defaultForFields` if set on type
* `subGraphs` of the type if `subGraphs.fieldsInheritFromTypes` was set in the builder
* an empty array
### Options on Builder
* `subGraphs.defaultForTypes`: Specifies what sub-graph a type is part of by default.
* `subGraphs.fieldsInheritFromTypes`: defaults to `false`. When true, fields on a type will default
to being part of the same sub-graph as their parent type. Only applies when type does not have
`defaultForFields` set.
### Usage
```typescript
builder.queryType({
// Query type will be available in default, Public, and Internal schemas
subGraphs: ['Public', 'Internal'],
// Fields on the Query object will now default to not being a part of any subgraph
defaultForFields: [],
fields: (t) => ({
someField: t.string({
// someField will be in the default schema and "Internal" sub graph, but
// not present in the Public sub graph
subGraphs: ['Internal'],
resolve: () => {
throw new Error('Not implemented');
},
}),
}),
});
```
### Missing types
When creating a sub-graph, the plugin will only copy in types that are included in the sub-graph,
either by explicitly setting it on the type, or because the sub-graph is included in the default
list. Like types, output fields that are not included in a sub-graph will also be omitted. Arguments
and fields on Input types can not be removed because that would break assumptions about argument
types in resolvers.
If a type that is not included in the sub-graph is referenced by another part of the graph that is
included in the graph, a runtime error will be thrown when the sub graph is constructed. This can
happen in a number of cases including cases where a removed type is used in the interfaces of an
object, a member of a union, or the type of a field argument.
### Explicitly including types
You can use the `explicitlyIncludeType` option to explicitly include types in a sub-graph that are
unreachable. This isn't normally required, but there are some edge cases where this may be useful.
For instance, when extending external references with the federation plugin, the externalRef may
not be reachable directly through your schema, but you may still want to include it when building the
schema. To work around this, we can explicitly include any types that have a `key` directive:
```typescript
import FederationPlugin, { hasResolvableKey } from '@pothos/plugin-federation';
import SubGraphPlugin from '@pothos/plugin-sub-graph';
const builder = new SchemaBuilder<{
SubGraphs: 'Public' | 'Internal';
}>({
plugins: [SubGraphPlugin, FederationPlugin],
subGraphs: {
explicitlyIncludeType: (type, subGraphs) => hasResolvableKey(type)
},
});
```
# Tracing plugin
URL: /docs/plugins/tracing
A Pothos plugin for tracing and logging resolver invocations
***
title: Tracing plugin
description: A Pothos plugin for tracing and logging resolver invocations
-------------------------------------------------------------------------
This plugin adds hooks for tracing and logging resolver invocations. It also comes with a few
additional packages for integrating with various tracing providers including opentelemetry, New
Relic and Sentry.
## Usage
### Install
```package-install
npm install --save @pothos/plugin-tracing
```
### Setup
```typescript
import TracingPlugin, { wrapResolver, isRootField } from '@pothos/plugin-tracing';
const builder = new SchemaBuilder({
plugins: [TracingPlugin],
tracing: {
// Enable tracing for rootFields by default, other fields need to opt in
default: (config) => isRootField(config),
// Log resolver execution duration
wrap: (resolver, options, config) =>
wrapResolver(resolver, (error, duration) => {
console.log(`Executed resolver ${config.parentType}.${config.name} in ${duration}ms`);
}),
},
});
```
### Overview
The Tracing plugin is designed to have very limited overhead, and uses a modular approach to cover a
wide variety of use cases.
The tracing plugin comes with a number of utility functions for implementing common patterns, and a
couple of provider specific modules that can be installed separately (described in more detail
below).
The primary interface to the tracing plugin consists of 3 parts:
1. A new `tracing` option is added to each field, for enabling or configuring tracing for that field
2. The `tracing.default` which is used as a fallback for any field that does not explicitly set its
`tracing` options.
3. The `tracing.wrap` function, which takes a resolver, the tracing option for a field, and a field
configuration object, and should return a wrapped/traced version of the resolver.
### Enabling tracing for a field
Enabling tracing on a field is as simple as setting the tracing option to `true`
```ts
builder.queryType({
fields: (t) => ({
hello: t.string({
args: { name: t.arg.string() },
// enable tracing
tracing: true,
resolve: (parent, { name }) => `hello, ${name || 'World'}`,
}),
}),
});
```
#### Custom tracing options
For more advanced tracing setups, you may want to allow fields to provide additional tracing
options. You can do this by customizing the `Tracing` generic in the builder.
```ts
import TracingPlugin, { wrapResolver, isRootField } from '@pothos/plugin-tracing';
export const builder = new SchemaBuilder<{
// the `tracing` option can now be a boolean, or an object with a formatMessage function
Tracing: boolean | { formatMessage: (duration: number) => string };
}>({
plugins: [TracingPlugin],
tracing: {
// Using custom options in your tracer will be described below
...
},
});
builder.queryType({
fields: (t) => ({
hello: t.string({
args: { name: t.arg.string() },
// We can now use custom options when configuring tracing
tracing: { formatMessage: (duration) => `It took ${duration}ms to say hello` },
resolve: (parent, { name }) => `hello, ${name || 'World'}`,
}),
}),
});
```
### Enabling tracing by default
In most applications you won't want to configure tracing for each field. Instead you can use the
`tracing.default` to enable tracing for specific types of fields.
```ts
import TracingPlugin, { wrapResolver, isRootField } from '@pothos/plugin-tracing';
export const builder = new SchemaBuilder<{
Tracing: boolean | { formatMessage: (duration: number) => string };
}>({
plugins: [TracingPlugin],
tracing: {
// Here we enable tracing for root fields
default: (config) => isRootField(config),
wrap: (resolve) => resolve, // actual tracing wrappers will be described below
},
});
```
There are a number of utility functions for detecting certain types of fields. For most applications
tracing every resolver will add significant overhead with very little benefit. The following
utilities exported by the tracing plugin can be used to determine which fields should have tracing
enabled by default.
* `isRootField`: Returns true for fields of the `Query`, `Mutation`, and `Subscription` types
* `isScalarField`: Returns true for fields that return Scalars, or lists of scalars
* `isEnumField`: Returns true for fields that return an Enum or list of Enums
* `isExposedField`: Returns true for fields defined with the `t.expose*` field builder methods, or
fields that use the `defaultFieldResolver`.
### Implementing a tracer
Tracers work by wrapping the execution of resolver calls. The `tracing.wrap` function keeps this
process as minimal as possible by simply providing the resolver for a field, and expecting a wrapped
version of the resolver to be returned. Resolvers can throw errors or return promises, and correctly
handling these edge cases can be a little complicated, so the tracing plugin also comes with some
helper utilities to simplify this process.
`tracing.wrap` takes 3 arguments:
1. `resolver`: the resolver for a field
2. `options`: the tracing options for the field (set either on the field, or returned by
`tracing.default`).
3. `fieldConfig`: A config object that describes the field being wrapped
```ts
export const builder = new SchemaBuilder<{
Tracing: boolean | { formatMessage: (duration: number) => string };
}>({
plugins: [TracingPlugin],
tracing: {
default: (config) => isRootField(config),
wrap: (resolver, options, config) =>
wrapResolver(resolver, (error, duration) => {
const message =
typeof options === 'object'
? options.formatMessage(duration)
: `Executed resolver ${config.parentType}.${config.name} in ${duration}ms`;
console.log(message);
}),
},
});
```
The `wrapResolver` utility takes a resolver and an `onEnd` callback, and returns a wrapped version
of the resolver that will call the callback with an error (or null) and the duration the resolver
took to complete.
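To illustrate the edge cases involved, here is a simplified sketch of what a `wrapResolver`-style helper has to handle (the `wrapWithTiming` name and implementation are illustrative only, not the plugin's actual code): resolvers may return a value, return a promise, or throw synchronously, and all three paths must report timing and preserve the original behavior.

```ts
type Resolver = (...args: any[]) => unknown;
type OnEnd = (error: unknown, duration: number) => void;

// Illustrative sketch of a timing wrapper; not the plugin's implementation
function wrapWithTiming(resolver: Resolver, onEnd: OnEnd): Resolver {
  return (...args) => {
    const start = Date.now();
    try {
      const result = resolver(...args);
      if (result instanceof Promise) {
        // Asynchronous resolvers: report once the promise settles
        return result.then(
          (value) => {
            onEnd(null, Date.now() - start);
            return value;
          },
          (error) => {
            onEnd(error, Date.now() - start);
            throw error;
          },
        );
      }
      // Synchronous resolvers: report immediately
      onEnd(null, Date.now() - start);
      return result;
    } catch (error) {
      // Synchronously thrown errors must also be reported, then rethrown
      onEnd(error, Date.now() - start);
      throw error;
    }
  };
}
```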
The `runFunction` helper is similar, but rather than wrapping a resolver, it immediately executes a
function with no arguments. This can be useful for more complex use cases where you need access to
other resolver arguments, or want to add your own logic before the resolver begins executing.
```ts
export const builder = new SchemaBuilder<{
Tracing: boolean | { formatMessage: (duration: number) => string };
}>({
plugins: [TracingPlugin],
tracing: {
default: (config) => isRootField(config) || (!isScalarField(config) && !isEnumField(config)),
wrap: (resolver, options) => (source, args, ctx, info) => {
doSomethingFirst(args);
return runFunction(
() => resolver(source, args, ctx, info),
(error, duration) => {
console.log(
`Executed resolver for ${info.parentType}.${info.fieldName} in ${duration}ms`,
);
},
);
},
},
});
```
### Using resolver arguments in tracers
When defining tracing options for a field, you may want to pass some resolver args to your tracing
logic.
The following example shows how arguments might be passed to a tracer to be attached to a span:
```ts
// Create a simple tracer that creates spans, and adds custom attributes if they are provided
export const builder = new SchemaBuilder<{
Tracing: false | { attributes?: Record<string, unknown> };
}>({
plugins: [TracingPlugin],
tracing: {
default: (config) => {
if (isRootField(config)) {
return {};
}
return false;
},
// The `tracing` options are passed as the second argument for wrap
wrap: (resolver, options, fieldConfig) => (source, args, ctx, info) => {
const span = tracer.createSpan();
if (options.attributes) {
span.setAttributes(options.attributes);
}
return runFunction(
() => resolver(source, args, ctx, info),
() => {
span.end();
},
);
},
},
});
builder.queryType({
fields: (t) => ({
hello: t.string({
args: { name: t.arg.string() },
// Pass this fields args as a custom attribute
tracing: (root, args) => ({ attributes: { args } }),
resolve: (root, { name }) => `hello, ${name || 'World'}`,
}),
}),
});
```
The `default` option can also return a function to access resolver arguments:
```ts
// Create a simple tracer that creates spans, and adds custom attributes if they are provided
export const builder = new SchemaBuilder<{
Tracing: false | { attributes?: Record<string, unknown> };
}>({
plugins: [TracingPlugin],
tracing: {
default: (config) => {
if (isRootField(config)) {
// For all root fields, add arguments as a custom attribute
return (root, args) => ({ attributes: { args }});
}
// disable tracing for exposed fields
if (isExposedField(config)) {
return false;
}
// Enable tracing, but don't add any attributes
return {};
},
wrap: ...,
},
});
```
It is important to know that if a field uses a function to return its tracing option (either
directly on the field definition, or as a default) the behavior of the `wrap` function changes
slightly.
By default `wrap` is called for each field when the schema is built. For fields that return their
tracing option via a function, wrap will be called whenever the field is executed because the
tracing options are dependent on the resolver arguments.
For many use cases this does not add a lot of overhead, but as a rule of thumb, it is always more
efficient to use tracing options that don't depend on the resolver's arguments.
The above example could be re-designed slightly to improve tracing performance:
```ts
// Create a simple tracer that creates spans, and adds custom attributes if they are provided
export const builder = new SchemaBuilder<{
Tracing: false | { includeArgs?: boolean };
}>({
plugins: [TracingPlugin],
tracing: {
default: (config) => {
if (isRootField(config)) {
// For all root fields, add arguments as a custom attribute
return { includeArgs: true };
}
return false;
},
// Wrap is now only called once for each field at build time
// since we don't depend on args to generate the tracing options
wrap: (resolver, options, fieldConfig) => (source, args, ctx, info) => {
const span = tracer.createSpan();
if (options.includeArgs) {
span.setAttributes({ args });
}
return runFunction(
() => resolver(source, args, ctx, info),
() => {
span.end();
},
);
},
},
});
```
## Tracing integrations
### Opentelemetry
#### install
```package-install
npm install --save @pothos/tracing-opentelemetry @opentelemetry/semantic-conventions @opentelemetry/api
```
#### Basic usage
```ts
import SchemaBuilder from '@pothos/core';
import TracingPlugin, { isRootField } from '@pothos/plugin-tracing';
import { createOpenTelemetryWrapper } from '@pothos/tracing-opentelemetry';
import { tracer } from './tracer';
const createSpan = createOpenTelemetryWrapper(tracer, {
includeSource: true,
});
export const builder = new SchemaBuilder({
plugins: [TracingPlugin],
tracing: {
default: (config) => isRootField(config),
wrap: (resolver, options) => createSpan(resolver, options),
},
});
```
#### options
* `includeArgs`: default: `false`
* `includeSource`: default: `false`
* `ignoreError`: default: `false`
* `onSpan`: `(span, tracingOptions, parent, args, context, info) => void`
#### Adding custom attributes to spans
```ts
import { AttributeValue } from '@opentelemetry/api';
import SchemaBuilder from '@pothos/core';
import TracingPlugin, { isRootField } from '@pothos/plugin-tracing';
import { createOpenTelemetryWrapper } from '@pothos/tracing-opentelemetry';
import { tracer } from './tracer';
type TracingOptions = boolean | { attributes?: Record<string, AttributeValue> };
const createSpan = createOpenTelemetryWrapper(tracer, {
includeSource: true,
onSpan: (span, options) => {
if (typeof options === 'object' && options.attributes) {
span.setAttributes(options.attributes);
}
},
});
export const builder = new SchemaBuilder<{
Tracing: TracingOptions;
}>({
plugins: [TracingPlugin],
tracing: {
default: (config) => isRootField(config),
wrap: (resolver, options) => createSpan(resolver, options),
},
});
builder.queryType({
fields: (t) => ({
hello: t.string({
args: { name: t.arg.string() },
tracing: (parent, { name }) => ({ attributes: { name } }),
resolve: (parent, { name }) => `hello, ${name || 'World'}`,
}),
}),
});
```
#### Instrumenting the execution phase
The tracing plugin for Pothos only adds spans for resolvers. You may also want to capture additional
information about other parts of the graphql execution process.
This example uses GraphQL Yoga, providing a custom envelop plugin that wraps the execution phase.
Many graphql server implementations have ways to wrap or replace the execution call, but the details
will look slightly different.
```ts
import { tracer } from './tracer'; // Tracer should be imported first if it handles additional instrumentation
import { print } from 'graphql';
import { createYoga, Plugin } from 'graphql-yoga';
import { createServer } from 'node:http';
import { AttributeNames, SpanNames } from '@pothos/tracing-opentelemetry';
import { schema } from './schema';
const tracingPlugin: Plugin = {
onExecute: ({ setExecuteFn, executeFn }) => {
setExecuteFn((options) =>
tracer.startActiveSpan(
SpanNames.EXECUTE,
{
attributes: {
[AttributeNames.OPERATION_NAME]: options.operationName ?? undefined,
[AttributeNames.SOURCE]: print(options.document),
},
},
async (span) => {
try {
const result = await executeFn(options);
return result;
} catch (error) {
span.recordException(error as Error);
throw error;
} finally {
span.end();
}
},
),
);
},
};
const yoga = createYoga({
schema,
plugins: [tracingPlugin],
});
const server = createServer(yoga);
```
Envelop also provides its own opentelemetry plugin which can be used instead of a custom plugin like
the one shown above. The biggest drawback is that the current version of `@envelop/opentelemetry`
does not track the parent/child relations of the spans it creates.
```ts
import { provider } from './tracer'; // Tracer should be imported first if it handles additional instrumentation
import { useOpenTelemetry } from '@envelop/opentelemetry';
import { createYoga } from 'graphql-yoga';
import { createServer } from 'node:http';
import { schema } from './schema';
const yoga = createYoga({
schema,
plugins: [
useOpenTelemetry(
{
// Disabling envelop's resolver tracing is important to avoid duplicate spans
resolvers: false,
variables: false,
result: false,
},
provider,
),
],
});
const server = createServer(yoga);
```
#### Setting up a tracer
The following setup creates a very simple opentelemetry tracer that will log spans to the console.
Real applications will need to define exporters that match the opentelemetry backend you are using.
```ts
import { diag, DiagConsoleLogger, DiagLogLevel, trace } from '@opentelemetry/api';
import { registerInstrumentations } from '@opentelemetry/instrumentation';
import { HttpInstrumentation } from '@opentelemetry/instrumentation-http';
import { ConsoleSpanExporter, SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
export const provider = new NodeTracerProvider({
spanProcessors: [new SimpleSpanProcessor(new ConsoleSpanExporter())]
});
provider.register();
registerInstrumentations({
// Automatically create spans for http requests
instrumentations: [new HttpInstrumentation({})],
});
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.INFO);
export const tracer = trace.getTracer('graphql');
```
### Datadog
Datadog supports opentelemetry. To report traces to datadog, you will need to instrument your
application with an opentelemetry tracer, and configure your datadog agent to collect open telemetry
traces.
#### Creating a tracer that exports to datadog
```ts
import { trace } from '@opentelemetry/api';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { registerInstrumentations } from '@opentelemetry/instrumentation';
import { HttpInstrumentation } from '@opentelemetry/instrumentation-http';
import { resourceFromAttributes } from '@opentelemetry/resources';
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { ATTR_SERVICE_NAME } from '@opentelemetry/semantic-conventions';
export const provider = new NodeTracerProvider({
resource: resourceFromAttributes({
[ATTR_SERVICE_NAME]: 'Pothos-OTEL-example',
}),
spanProcessors: [
new SimpleSpanProcessor(
new OTLPTraceExporter({
// optionally set the opentelemetry collector endpoint if you are not using the default port
// url: 'http://host:port',
}),
),
],
});
provider.register();
registerInstrumentations({
instrumentations: [new HttpInstrumentation({})],
});
export const tracer = trace.getTracer('graphql');
```
#### Configuring the datadog agent to collect open telemetry
Add the following to your datadog agent configuration
```yaml
otlp_config:
receiver:
protocols:
http:
endpoint: 0.0.0.0:4318
```
### New Relic
#### install
```package-install
npm install --save @pothos/tracing-newrelic newrelic @types/newrelic
```
#### Basic usage
```ts
import SchemaBuilder from '@pothos/core';
import TracingPlugin, { isRootField } from '@pothos/plugin-tracing';
import { createNewrelicWrapper } from '@pothos/tracing-newrelic';
const wrapResolver = createNewrelicWrapper({
includeArgs: true,
includeSource: true,
});
export const builder = new SchemaBuilder({
plugins: [TracingPlugin],
tracing: {
default: (config) => isRootField(config),
wrap: (resolver) => wrapResolver(resolver),
},
});
```
#### options
* `includeArgs`: default: `false`
* `includeSource`: default: `false`
#### Instrumenting the execution phase
The tracing plugin for Pothos only adds spans for resolvers. You may also want to capture additional
information about other parts of the graphql execution process.
This example uses GraphQL Yoga, providing a custom envelop plugin that wraps the execution phase.
Many graphql server implementations have ways to wrap or replace the execution call, but the details
will look slightly different.
```ts
import newrelic from 'newrelic'; // newrelic must be imported first
import { print } from 'graphql';
import { createYoga, Plugin } from 'graphql-yoga';
import { createServer } from 'node:http';
import { AttributeNames } from '@pothos/tracing-newrelic';
import { schema } from './schema';
const tracingPlugin: Plugin = {
onExecute: ({ args }) => {
newrelic.addCustomAttributes({
[AttributeNames.OPERATION_NAME]: args.operationName ?? '',
[AttributeNames.SOURCE]: print(args.document),
});
},
};
const yoga = createYoga({
schema,
plugins: [tracingPlugin],
});
const server = createServer(yoga);
```
### Using the envelop newrelic plugin
Envelop has its own plugin for New Relic that can be combined with the tracing plugin:
```ts
import { useNewRelic } from '@envelop/newrelic';
import { createYoga } from 'graphql-yoga';
import { createServer } from 'node:http';
import { schema } from './schema';
const yoga = createYoga({
schema,
plugins: [
useNewRelic({
// Disable resolver tracking since this is covered by the pothos tracing plugin
// If all resolvers are being traced, you could use the New Relic envelop plugin instead of the pothos tracing plugin
trackResolvers: false,
}),
],
});
const server = createServer(yoga);
```
### Sentry
#### install
```package-install
npm install --save @pothos/tracing-sentry @sentry/node
```
#### Basic usage
```ts
import SchemaBuilder from '@pothos/core';
import TracingPlugin, { isRootField } from '@pothos/plugin-tracing';
import { createSentryWrapper } from '@pothos/tracing-sentry';
const traceResolver = createSentryWrapper({
includeArgs: true,
includeSource: true,
});
export const builder = new SchemaBuilder({
plugins: [TracingPlugin],
tracing: {
default: (config) => isRootField(config),
wrap: (resolver, options) => traceResolver(resolver, options),
},
});
```
#### options
* `includeArgs`: default: `false`
* `includeSource`: default: `false`
* `ignoreError`: default: `false`
#### Instrumenting the execution phase
The tracing plugin for Pothos only adds spans for resolvers. You may also want to capture additional
information about other parts of the graphql execution process.
This example uses GraphQL Yoga, providing a custom envelop plugin that wraps the execution phase.
Many graphql server implementations have ways to wrap or replace the execution call, but the details
will look slightly different.
```ts
import { print } from 'graphql';
import { createYoga, Plugin } from 'graphql-yoga';
import { createServer } from 'node:http';
import { AttributeNames } from '@pothos/tracing-sentry';
import * as Sentry from '@sentry/node';
import { schema } from './schema';
Sentry.init({
dsn: process.env.SENTRY_DSN,
tracesSampleRate: 1,
});
const tracingPlugin: Plugin = {
onExecute: ({ setExecuteFn, executeFn }) => {
setExecuteFn((options) =>
Sentry.startSpan(
{
op: 'graphql.execute',
name: options.operationName ?? '',
forceTransaction: true,
attributes: {
[AttributeNames.OPERATION_NAME]: options.operationName ?? undefined,
[AttributeNames.SOURCE]: print(options.document),
},
},
() => executeFn(options),
),
);
},
};
const yoga = createYoga({
schema,
plugins: [tracingPlugin],
});
const server = createServer(yoga);
```
### Using the envelop sentry plugin
Envelop has its own plugin for Sentry that can be combined with the tracing plugin:
```ts
import { useSentry } from '@envelop/sentry';
import { createYoga } from 'graphql-yoga';
import { createServer } from 'node:http';
import { schema } from './schema';
const yoga = createYoga({
schema,
plugins: [useSentry({})],
});
const server = createServer(yoga);
```
### AWS XRay
#### install
```package-install
npm install --save @pothos/tracing-xray aws-xray-sdk-core
```
#### Basic usage
```ts
import SchemaBuilder from '@pothos/core';
import TracingPlugin, { isEnumField, isRootField, isScalarField } from '@pothos/plugin-tracing';
import { createXRayWrapper } from '@pothos/tracing-xray';
const traceResolver = createXRayWrapper({
includeArgs: true,
includeSource: true,
});
export const builder = new SchemaBuilder({
plugins: [TracingPlugin],
tracing: {
default: (config) => isRootField(config) || (!isScalarField(config) && !isEnumField(config)),
wrap: (resolver, options) => traceResolver(resolver, options),
},
});
```
#### options
* `includeArgs`: default: `false`
* `includeSource`: default: `false`
#### Instrumenting the execution phase
The tracing plugin for Pothos only adds spans for resolvers. You may also want to capture additional
information about other parts of the graphql execution process.
This example uses GraphQL Yoga, providing a custom envelop plugin that wraps the execution phase.
Many graphql server implementations have ways to wrap or replace the execution call, but the details
will look slightly different.
```ts
import AWSXRay from 'aws-xray-sdk-core';
import { print } from 'graphql';
import { createYoga, Plugin } from 'graphql-yoga';
import { createServer } from 'node:http';
import { AttributeNames, SpanNames } from '@pothos/tracing-xray';
import { schema } from './schema';
const tracingPlugin: Plugin = {
onExecute: ({ setExecuteFn, executeFn }) => {
setExecuteFn(async (options) => {
const parent = new AWSXRay.Segment('parent');
return AWSXRay.getNamespace().runAndReturn(() => {
AWSXRay.setSegment(parent);
return AWSXRay.captureAsyncFunc(
SpanNames.EXECUTE,
(segment) => {
if (segment) {
segment.addAttribute(
AttributeNames.OPERATION_NAME,
options.operationName ?? '',
);
segment.addAttribute(AttributeNames.SOURCE, print(options.document));
}
return executeFn(options);
},
parent,
);
});
});
},
};
const yoga = createYoga({
schema,
plugins: [tracingPlugin],
});
const server = createServer(yoga);
```
# Validation plugin
URL: /docs/plugins/validation
Validation plugin docs for Pothos
***
title: Validation plugin
description: Validation plugin docs for Pothos
----------------------------------------------
A plugin for adding validation to field arguments, input object fields, and input types using modern validation libraries like [Zod](https://github.com/colinhacks/zod), [Valibot](https://valibot.dev), and [ArkType](https://arktype.io).
This plugin provides a library-agnostic approach to validation by supporting any validation library that implements the [standard schema](https://standardschema.dev) interface, making it flexible and future-proof.
## Usage
### Install
To use the validation plugin, you'll need to install the validation plugin and a compatible validation library:
```package-install
npm install --save @pothos/plugin-validation zod
# OR
npm install --save @pothos/plugin-validation valibot
# OR
npm install --save @pothos/plugin-validation arktype
```
### Setup
```typescript
import ValidationPlugin from '@pothos/plugin-validation';
import { z } from 'zod'; // or your preferred validation library
const builder = new SchemaBuilder({
plugins: [ValidationPlugin],
});
builder.queryType({
fields: (t) => ({
simple: t.boolean({
args: {
// Validate individual arguments
email: t.arg.string({
validate: z.string().email(),
}),
},
resolve: () => true,
}),
}),
});
```
## Validation API Overview
The validation plugin supports validating inputs and arguments in several different ways:
* **Argument validation**: `t.arg.string({ validate: schema })` or `t.arg.string().validate(schema)` - Validate individual arguments
* **Validate all field args**: `t.field({ args, validate: schema, ... })` or `t.field({ args: t.validate(args), ... })` - Validate all arguments together
* **Input type validation**: `builder.inputType({ validate: schema, ... })` or `builder.inputType({ ... }).validate(schema)` - Validate entire input objects
* **Input field validation**: `t.string({ validate: schema })` or `t.string().validate(schema)` - Validate individual input type fields
## Validation Patterns
### Argument Validation
Validate each field argument independently using either the object syntax or chaining API:
```typescript
builder.queryType({
fields: (t) => ({
user: t.string({
args: {
email: t.arg.string({
validate: z.string().email(),
}),
name: t.arg.string()
.validate(z.string().min(2).max(50)),
},
resolve: (_, args) => `User: ${args.name}`,
}),
}),
});
```
#### Data Transformation with Argument Validation
When using the chaining API, you can transform data as part of the validation process:
```typescript
builder.queryType({
fields: (t) => ({
processData: t.string({
args: {
// Convert comma-separated string to array
tags: t.arg.string()
.validate(z.string().transform(str => str.split(',').map(s => s.trim()))),
},
resolve: (_, args) => {
return `Processed ${args.tags.length} tags`;
},
}),
}),
});
```
### Validating all Field Arguments Together
You can validate all arguments of a field together by passing a validation schema in the `t.field` options:
```typescript
builder.queryType({
fields: (t) => ({
contact: t.boolean({
args: {
email: t.arg.string(),
phone: t.arg.string(),
},
// Ensure at least one contact method is provided
validate: z
.object({
email: z.string().optional(),
phone: z.string().optional(),
})
.refine(
(args) => !!args.phone || !!args.email,
{ message: 'Must provide either phone or email' }
),
resolve: () => true,
}),
}),
});
```
#### With transforms
To transform all arguments together, you will need to use `t.validate(args)`:
```typescript
builder.queryType({
fields: (t) => ({
user: t.string({
args: t.validate({
email: t.arg.string(),
phone: t.arg.string(),
},
z.object({
email: z.string().optional(),
phone: z.string().optional(),
})
.refine(
(args) => !!args.phone || !!args.email,
{ message: 'Must provide either phone or email' }
)
.transform((args) => ({
filter: {
email: args.email ? args.email.toLowerCase() : undefined,
phone: args.phone ? args.phone.replace(/\D/g, '') : undefined,
},
}))
),
resolve: (_, args) => {
// args has transformed shape:
// { filter: { email?: string, phone?: string } }
return `User filter: ${JSON.stringify(args.filter)}`;
},
}),
}),
});
```
### Input Type Validation
Validate entire input objects with complex validation logic using either object syntax or chaining:
```typescript
// Object syntax
const UserInput = builder.inputType('UserInput', {
fields: (t) => ({
name: t.string(),
age: t.int(),
}),
validate: z
.object({
name: z.string(),
age: z.number(),
})
.refine((user) => user.name !== 'admin', {
message: 'Username "admin" is not allowed',
})
});
```
#### Input Type Transformation
Transform entire input types:
```typescript
const UserInput = builder.inputType('RawUserInput', {
fields: (t) => ({
fullName: t.string(),
birthYear: t.string(),
}),
}).validate(
z.object({
fullName: z.string(),
birthYear: z.string(),
}).transform(data => ({
firstName: data.fullName.split(' ')[0],
lastName: data.fullName.split(' ').slice(1).join(' '),
age: new Date().getFullYear() - parseInt(data.birthYear),
}))
);
builder.queryType({
fields: (t) => ({
createUser: t.string({
args: {
userData: t.arg({ type: UserInput }),
},
resolve: (_, args) => {
// args.userData has transformed shape:
// { firstName: string, lastName: string, age: number }
return `Created user: ${args.userData.firstName} ${args.userData.lastName}`;
},
}),
}),
});
```
### Input Field Validation
Validate individual fields within input types:
```typescript
const UserInput = builder.inputType('UserInput', {
fields: (t) => ({
name: t.string({
validate: z.string().min(2).refine(
(name) => name[0].toUpperCase() === name[0],
{ message: 'Name must be capitalized' }
),
})
}),
});
```
#### Input Field Transformation
Transform field values during validation:
```typescript
const UserInput = builder.inputType('UserInput', {
fields: (t) => ({
birthDate: t.string()
.validate(z.string().regex(/^\d{4}-\d{2}-\d{2}$/))
.validate(z.string().transform(str => new Date(str))),
}),
});
```
## Supported Validation Libraries
This plugin works with multiple validation libraries, giving you the flexibility to choose the one that best fits your needs:
* **[Zod](https://zod.dev)** - TypeScript-first schema validation with static type inference
* **[Valibot](https://valibot.dev)** - The open source schema library for TypeScript with bundle size, type safety and developer experience in mind
* **[ArkType](https://arktype.io)** - TypeScript's 1:1 validator, optimized from editor to runtime
* Any library implementing the [standard schema](https://standardschema.dev) interface
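As an illustration, the email-argument example shown in the Setup section could be written with Valibot instead of Zod (assuming Valibot's `v.pipe` and `v.email` APIs):

```typescript
import * as v from 'valibot';

builder.queryType({
  fields: (t) => ({
    simple: t.boolean({
      args: {
        email: t.arg.string({
          // Same email constraint as the zod example, expressed with valibot
          validate: v.pipe(v.string(), v.email()),
        }),
      },
      resolve: () => true,
    }),
  }),
});
```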
## Plugin Options
### `validationError`
The `validationError` option allows you to customize how validation errors are handled and formatted. This is useful for:
* Customizing error messages for your application's needs
* Logging validation failures for monitoring
* Integrating with error tracking services
* Providing context-specific error messages
```typescript
const builder = new SchemaBuilder({
plugins: [ValidationPlugin],
validation: {
validationError: (validationResult, args, context) => {
// validationResult contains the standard-schema validation result
return new Error(`Validation failed: ${validationResult.issues.map(i => i.message).join(', ')}`);
},
},
});
```
#### Return Values
Your error handler can return:
* **Error object**: Return a custom Error instance
* **String**: Return a string message (will be wrapped in a PothosValidationError)
* **Throw**: Throw an error directly
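For example, a minimal sketch of a handler that returns a plain string, which the plugin then wraps in a `PothosValidationError`:

```typescript
const builder = new SchemaBuilder({
  plugins: [ValidationPlugin],
  validation: {
    // Returning a string message; the plugin wraps it in a PothosValidationError
    validationError: (validationResult) =>
      `Invalid input: ${validationResult.issues.length} issue(s) found`,
  },
});
```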
### Validation Execution Order
Understanding when and how validations are executed:
1. **Input Field Validation**: Individual input fields are validated first
2. **Input Type Validation**: Whole input object validation runs after field validation passes
3. **Argument Validation**: Individual field arguments are validated
4. **Field-Level Validation**: Cross-field validation with `t.validate()` runs last
When there are multiple validations for the same field or type, they are executed in order, so that any transforms are applied before passing to the next schema.
Validations for separate fields or arguments are executed in parallel, and their results are merged into a single set of issues.
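The sequential behavior can be sketched as follows (the `MiniSchema` shape and `runInOrder` helper are hypothetical stand-ins for standard-schema validators, not the plugin's internals):

```typescript
// Hypothetical minimal stand-in for a standard-schema validator,
// used only to illustrate ordering
type MiniSchema = { validate: (value: unknown) => unknown };

// Apply each schema in order: the (possibly transformed) output of one
// schema becomes the input of the next
function runInOrder(value: unknown, schemas: MiniSchema[]): unknown {
  return schemas.reduce((current, schema) => schema.validate(current), value);
}

const trim: MiniSchema = { validate: (value) => String(value).trim() };
const toDate: MiniSchema = { validate: (value) => new Date(String(value)) };

// trim runs first, so toDate receives the cleaned-up string
const result = runInOrder(' 2024-01-01 ', [trim, toDate]);
```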
# With-Input plugin
URL: /docs/plugins/with-input
With-Input plugin docs for Pothos
***
title: With-Input plugin
description: With-Input plugin docs for Pothos
----------------------------------------------
A plugin for creating fields with a single input object. This plugin adds a new `t.fieldWithInput`
method that allows you to more easily define fields with a single input type without having to
define it separately.
## Usage
### Install
```package-install
npm install --save @pothos/plugin-with-input
```
### Setup
```typescript
import WithInputPlugin from '@pothos/plugin-with-input';
const builder = new SchemaBuilder({
plugins: [WithInputPlugin],
// optional
withInput: {
typeOptions: {
// default options for Input object types created by this plugin
},
argOptions: {
// set required: false to override default behavior
},
},
});
```
### Defining fields with inputs
```typescript
builder.queryType({
fields: (t) => ({
example: t.fieldWithInput({
input: {
// Note that this uses a new t.input field builder for defining input fields
id: t.input.id({ required: true }),
},
type: 'ID',
resolve: (root, args) => args.input.id,
}),
}),
});
```
This will produce a schema like:
```graphql
type Query {
example(input: QueryExampleInput!): ID!
}
input QueryExampleInput {
id: ID!
}
```
The input name will default to `${ParentType.name}${Field.name}Input`.
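As a sketch of that convention (a hypothetical helper, not a plugin export), the default name combines the parent type name with the capitalized field name, matching the `QueryExampleInput` name in the schema above:

```typescript
// Hypothetical helper illustrating the default naming convention;
// not an actual export of the plugin
function defaultInputName(parentTypeName: string, fieldName: string): string {
  const capitalizedFieldName = `${fieldName[0].toUpperCase()}${fieldName.slice(1)}`;
  return `${parentTypeName}${capitalizedFieldName}Input`;
}
```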
### Customizing your input object
You can customize the name of your Input object, and the name of the input argument:
```typescript
builder.queryType({
fields: (t) => ({
example: t.fieldWithInput({
typeOptions: {
name: 'CustomInputTypeName',
// Additional options for the input type can be added here
},
argOptions: {
name: 'customArgName',
// Additional options for the input argument can be added here
},
input: {
id: t.input.id({ required: true }),
},
type: 'ID',
// inputs are now under `customArgName`
resolve: (root, args) => args.customArgName.id,
}),
}),
});
```
### Changing the nullability of the input arg
You can configure the global default for input args when creating the builder by providing
`WithInputArgRequired` in the builder's `SchemaTypes`, and setting `withInput.argOptions.required`.
```typescript
const builder = new SchemaBuilder<{ WithInputArgRequired: false }>({
plugins: [WithInputPlugin],
withInput: {
argOptions: {
required: false,
},
},
});
```
Arg requiredness can also be set on a per-field basis by setting `argOptions.required`:
```typescript
builder.queryType({
fields: (t) => ({
example: t.fieldWithInput({
type: 'Boolean',
argOptions: {
required: false,
},
input: {
someInput: t.input.boolean({}),
},
resolve: (root, args) => {
return args.input?.someInput;
},
}),
}),
});
```
### Prisma plugin integration
If you are using the prisma plugin you can use `t.prismaFieldWithInput` to add prisma fields with
input objects:
```typescript
builder.queryField('user', (t) =>
t.prismaFieldWithInput({
type: 'User',
input: {
id: t.input.id({ required: true }),
},
resolve: (query, _, args) =>
prisma.user.findUnique({
where: {
id: Number.parseInt(args.input.id, 10),
},
...query,
}),
}),
);
```
### Customizing the default naming conventions
If you want to customize how the default input type names are generated you can provide a name
callback in `withInput.typeOptions`:
```typescript
import WithInputPlugin from '@pothos/plugin-with-input';
const builder = new SchemaBuilder({
plugins: [WithInputPlugin],
withInput: {
typeOptions: {
name: ({ parentTypeName, fieldName }) => {
const capitalizedFieldName = `${fieldName[0].toUpperCase()}${fieldName.slice(1)}`;
// This will remove the default Query/Mutation prefix from the input type name
if (parentTypeName === 'Query' || parentTypeName === 'Mutation') {
return `${capitalizedFieldName}Input`;
}
return `${parentTypeName}${capitalizedFieldName}Input`;
},
},
},
});
```
# Zod Validation plugin
URL: /docs/plugins/zod
Zod plugin docs for Pothos
***
title: Zod Validation plugin
description: Zod plugin docs for Pothos
---------------------------------------
A plugin for adding validation for field arguments based on
[zod](https://github.com/colinhacks/zod). This plugin does not expose zod directly, but most of the
options map closely to the validations available in zod.
## Usage
### Install
To use the zod plugin you will need to install both the `zod` package and the zod plugin:
```package-install
npm install --save zod @pothos/plugin-zod
```
### Setup
```typescript
import ZodPlugin from '@pothos/plugin-zod';
const builder = new SchemaBuilder({
plugins: [ZodPlugin],
zod: {
// optionally customize how errors are formatted
validationError: (zodError, args, context, info) => {
// the default behavior is to just throw the zod error directly
return zodError;
},
},
});
builder.queryType({
fields: (t) => ({
simple: t.boolean({
args: {
// Validate individual args
email: t.arg.string({
validate: {
email: true,
},
}),
phone: t.arg.string(),
},
// Validate all args together
validate: (args) => !!args.phone || !!args.email,
resolve: () => true,
}),
}),
});
```
## Options
`validationError`: (optional) A function that will be called when validation fails. The function
will be passed the the zod validation error, as well as the args, context and info objects. It can
throw an error, or return an error message or custom Error instance.
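For example, a custom handler might flatten the zod issues into a single `Error`. This sketch only assumes that the error exposes an `issues` array with `path` and `message` entries (matching zod's `ZodError` shape); it is not part of the plugin's API:

```typescript
// Sketch of a custom `validationError` handler that joins all issues
// into one Error message. Assumes only the `issues` array shape below.
type IssueLike = { path: (string | number)[]; message: string };

function formatValidationError(zodError: { issues: IssueLike[] }): Error {
  const message = zodError.issues
    .map((issue) => `${issue.path.join('.') || 'value'}: ${issue.message}`)
    .join('; ');
  return new Error(message);
}
```

A function like this could be passed as `validationError` in the plugin options instead of the default behavior of throwing the zod error directly.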
### Examples
#### With custom message
```typescript
builder.queryType({
fields: (t) => ({
withMessage: t.boolean({
args: {
email: t.arg.string({
validate: {
email: [true, { message: 'invalid email address' }],
},
}),
phone: t.arg.string(),
},
validate: [
(args) => !!args.phone || !!args.email,
{ message: 'Must provide either phone number or email address' },
],
resolve: () => true,
}),
}),
});
```
### Validating Lists
```typescript
builder.queryType({
fields: (t) => ({
list: t.boolean({
args: {
list: t.arg.stringList({
validate: {
items: {
email: true,
},
maxLength: 3,
},
}),
},
resolve: () => true,
}),
}),
});
```
### Using your own zod schemas
If you just want to use a zod schema defined somewhere else, rather than using the validation
options you can use the `schema` option:
```typescript
builder.queryType({
fields: (t) => ({
list: t.boolean({
args: {
max5: t.arg.int({
validate: {
schema: zod.number().int().max(5),
},
}),
},
resolve: () => true,
}),
}),
});
```
You can also validate all arguments together using a zod schema:
```typescript
builder.queryType({
fields: (t) => ({
simple: t.boolean({
args: {
email: t.arg.string(),
phone: t.arg.string(),
},
// Validate all args together using own zod schema
validate: {
schema: zod.object({
email: zod.string().email(),
phone: zod.string(),
}),
},
resolve: () => true,
}),
}),
});
```
## API
### On Object fields (for validating field arguments)
* `validate`: `Refinement` | `Refinement[]` | `ValidationOptions`.
### On InputObjects (for validating all fields of an input object)
* `validate`: `Refinement` | `Refinement[]` | `ValidationOptions`.
### On arguments or input object fields (for validating a specific input field or argument)
* `validate`: `Refinement` | `Refinement[]` | `ValidationOptions`.
### `Refinement`
A `Refinement` is a function that will be passed to the `zod` `refine` method. It receives the args
object, input object, or value of the specific field the refinement is defined on. It should return
a `boolean` or `Promise`.
`Refinement`s can either be just a function: `(val) => isValid(val)`, or an array with the function,
and an options object like: `[(val) => isValid(val), { message: 'field should be valid' }]`.
The options object may have a `message` property, and if the type being validated is an object, it
can also include a `path` property with an array of strings indicating the path of the field in the
object being validated. See the zod docs on `refine` for more details.
### `ValidationOptions`
The validation options available depend on the type being validated. Each property of
`ValidationOptions` can either be a value specific to the constraint, or an array with the value,
and the options passed to the underlying zod method. This options object can be used to set a custom
error message:
```typescript
{
validate: {
max: [10, { message: 'should not be more than 10' }],
int: true,
}
}
```
#### Number
* `type`?: `'number'`
* `refine`?: `Refinement | Refinement[]`
* `min`?: `Constraint`
* `max`?: `Constraint`
* `positive`?: `Constraint`
* `nonnegative`?: `Constraint`
* `negative`?: `Constraint`
* `nonpositive`?: `Constraint`
* `int`?: `Constraint`
* `schema`?: `ZodSchema`
#### BigInt
* `type`?: `'bigint'`
* `refine`?: `Refinement | Refinement[]`
* `schema`?: `ZodSchema`
#### Boolean
* `type`?: `'boolean'`
* `refine`?: `Refinement | Refinement[]`
* `schema`?: `ZodSchema`
#### Date
* `type`?: `'date'`
* `refine`?: `Refinement | Refinement[]`
* `schema`?: `ZodSchema`
#### String
* `type`?: `'string'`;
* `refine`?: `Refinement | Refinement[]`
* `minLength`?: `Constraint`
* `maxLength`?: `Constraint`
* `length`?: `Constraint`
* `url`?: `Constraint`
* `uuid`?: `Constraint`
* `email`?: `Constraint`
* `regex`?: `Constraint`
* `schema`?: `ZodSchema`
#### Object
* `type`?: `'object'`;
* `refine`?: `Refinement | Refinement[]`
* `schema`?: `ZodSchema`
#### Array
* `type`?: `'array'`;
* `refine`?: `Refinement | Refinement[]`
* `minLength`?: `Constraint`
* `maxLength`?: `Constraint`
* `length`?: `Constraint`
* `items`?: `ValidationOptions | Refinement`
* `schema`?: `ZodSchema`
### How it works
Each arg on an object field, and each field on an input type with validation will build its own zod
validator. These validators will be a union of all potential types that can apply the validations
defined for that field. For example, if you define an optional field with a `maxLength` validator,
it will create a zod schema that looks something like:
```typescript
zod.union([zod.null(), zod.undefined(), zod.array(zod.unknown()).max(5), zod.string().max(5)]);
```
If you set an `email` validation instead, the schema might look like:
```typescript
zod.union([zod.null(), zod.undefined(), zod.string().email()]);
```
At runtime, we don't know anything about the types being used by your schema, and we can't infer
the expected js type from the type definition, so the best we can do is limit the valid types based
on what validations they support. The `type` validation allows explicitly validating the `type` of a
field to be one of the base types supported by zod:
```typescript
// field
{
validate: {
type: 'string',
maxLength: 5
  }
}
// generated
zod.union([zod.null(), zod.undefined(), zod.string().max(5)]);
```
There are a few exceptions to the above:
1. args and input fields that are `InputObject`s always use `zod.object()` rather than creating a
union of potential types.
2. args and input fields that are list types always use `zod.array()`.
3. If you only include a `refine` validation (or just pass a function directly to validate) we will
just use zod's `unknown` validator instead:
```typescript
// field
{
validate: (val) => isValid(val),
}
// generated
zod.union([zod.null(), zod.undefined(), zod.unknown().refine((val) => isValid(val))]);
```
If the validation options include a `schema`, that schema will be used as an intersection with the
generated validator:
```typescript
// field
{
validate: {
int: true,
schema: zod.number().max(10),
  }
}
// generated
zod.union([zod.null(), zod.undefined(), zod.intersection(zod.number().max(10), zod.number().int())]);
```
### Sharing schemas with client code
The easiest way to share validators is to define schemas for your fields in an external file using
the normal zod APIs, and then attach those schemas to your fields using the `schema` option.
```typescript
// shared
import * as zod from 'zod';

export const numberValidation = zod.number().max(5);

// server
builder.queryType({
  fields: (t) => ({
    example: t.boolean({
      args: {
        num: t.arg.int({
          validate: {
            schema: numberValidation,
          },
        }),
      },
      resolve: () => true,
    }),
  }),
});

// client
numberValidation.parse(3); // pass
numberValidation.parse('3'); // fail
```
You can also use the `createZodSchema` helper from the plugin directly to create zod Schemas from an
options object:
```typescript
// shared
import { ValidationOptions } from '@pothos/plugin-zod';

export const numberValidation: ValidationOptions = {
  max: 5,
};

// server
builder.queryType({
  fields: (t) => ({
    example: t.boolean({
      args: {
        num: t.arg.int({
          validate: numberValidation,
        }),
      },
      resolve: () => true,
    }),
  }),
});

// client
import { createZodSchema } from '@pothos/plugin-zod';

const validator = createZodSchema(numberValidation);
validator.parse(3); // pass
validator.parse('3'); // fail
```
# Connections
URL: /docs/plugins/prisma/connections
Creating relay connections with the Prisma plugin
***
title: Connections
description: Creating relay connections with the Prisma plugin
--------------------------------------------------------------
### `prismaConnection`
The `prismaConnection` method on a field builder can be used to create a relay `connection` field
that also pre-loads all the data nested inside that connection.
```typescript
builder.queryType({
fields: (t) => ({
posts: t.prismaConnection(
{
type: 'Post',
cursor: 'id',
resolve: (query, parent, args, context, info) => prisma.post.findMany({ ...query }),
},
{}, // optional options for the Connection type
      {}, // optional options for the Edge type
),
}),
});
```
#### options
* `type`: the name of the prisma model being connected to
* `cursor`: a `@unique` column of the model being connected to. This is used as the `cursor` option
passed to prisma.
* `defaultSize`: (default: 20) The default page size to use if `first` and `last` are not provided.
* `maxSize`: (default: 100) The maximum number of nodes returned for a connection.
* `resolve`: Like the resolver for `prismaField`, the first argument is a `query` object that should
be spread into your prisma query. The `resolve` function should return an array of nodes for the
connection. The `query` will contain the correct `take`, `skip`, and `cursor` options based on the
connection arguments (`before`, `after`, `first`, `last`), along with `include` options for nested
selections.
* `totalCount`: A function for loading the total count for the connection. This will add a
`totalCount` field to the connection object. The `totalCount` method will receive (`connection`,
`args`, `context`, `info`) as arguments. Note that this will not work when using a shared
connection object (see details below)
The created connection queries currently support the following combinations of connection arguments:
* `first`, `last`, or `before`
* `first` and `before`
* `last` and `after`
Queries for other combinations are not as useful, and generally require loading all records
between 2 cursors, or between a cursor and the end of the set. Generating query options for these
cases is more complex and likely very inefficient, so they will currently throw an Error indicating
the argument combinations are not supported.
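Under the hood, connection arguments are translated into Prisma's `cursor`, `take`, and `skip` options. The sketch below is illustrative only (not the plugin's actual implementation) and assumes a numeric `id` cursor that has already been decoded; the real plugin also fetches an extra row to compute page info:

```typescript
// Illustrative only: how relay-style pagination arguments can translate
// into Prisma query options.
interface PageArgs {
  first?: number;
  last?: number;
  after?: number; // decoded cursor value
  before?: number; // decoded cursor value
}

function toPrismaOptions({ first, last, after, before }: PageArgs) {
  if (first != null) {
    // forward pagination: read `first` rows, starting just past the cursor
    return after != null
      ? { take: first, skip: 1, cursor: { id: after } }
      : { take: first };
  }
  if (last != null) {
    // backward pagination: a negative take reads backwards from the cursor
    return before != null
      ? { take: -last, skip: 1, cursor: { id: before } }
      : { take: -last };
  }
  return {};
}
```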
The `maxSize` and `defaultSize` can also be configured globally using `maxConnectionSize` and
`defaultConnectionSize` options in the `prisma` plugin options.
### `relatedConnection`
The `relatedConnection` method can be used to create a relay `connection` field based on a relation
of the current model.
```typescript
builder.prismaNode('User', {
id: { field: 'id' },
fields: (t) => ({
// Connections can be very simple to define
simplePosts: t.relatedConnection('posts', {
cursor: 'id',
}),
// Or they can include custom arguments, and other options
posts: t.relatedConnection(
'posts',
{
cursor: 'id',
args: {
oldestFirst: t.arg.boolean(),
},
query: (args, context) => ({
orderBy: {
createdAt: args.oldestFirst ? 'asc' : 'desc',
},
}),
},
{}, // optional options for the Connection type
      {}, // optional options for the Edge type
),
}),
});
```
#### options
* `cursor`: a `@unique` column of the model being connected to. This is used as the `cursor` option
passed to prisma.
* `defaultSize`: (default: 20) The default page size to use if `first` and `last` are not provided.
* `maxSize`: (default: 100) The maximum number of nodes returned for a connection.
* `query`: A method that accepts the `args` and `context` for the connection field, and returns
filtering and sorting logic that will be merged into the query for the relation.
* `totalCount`: when set to true, this will add a `totalCount` field to the connection object. see
`relationCount` above for more details. Note that this will not work when using a shared
connection object (see details below)
### Indirect relations as connections
Creating connections from indirect relations is a little more involved, but can be achieved using
`prismaConnectionHelpers` with a normal `t.connection` field.
```typescript
// Create a prisma object for the node type of your connection
const Media = builder.prismaObject('Media', {
select: {
id: true,
},
fields: (t) => ({
url: t.exposeString('url'),
}),
});
// Create connection helpers for the media type. This will allow you
// to use the normal t.connection with a prisma type
const mediaConnectionHelpers = prismaConnectionHelpers(
builder,
'PostMedia', // this should be the join table
{
cursor: 'id',
select: (nodeSelection) => ({
// select the relation to the media node using the nodeSelection function
media: nodeSelection({
// optionally specify fields to select by default for the node
select: {
id: true,
posts: true,
},
}),
}),
// resolve the node from the edge
resolveNode: (postMedia) => postMedia.media,
// additional/optional options
maxSize: 100,
defaultSize: 20,
},
);
builder.prismaObjectField('Post', 'mediaConnection', (t) =>
t.connection({
// The type for the Node
type: Media,
// since we are not using t.relatedConnection we need to manually
// include the selections for our connection
select: (args, ctx, nestedSelection) => ({
media: mediaConnectionHelpers.getQuery(args, ctx, nestedSelection),
}),
resolve: (post, args, ctx) =>
// This helper takes a list of nodes and formats them for the connection
mediaConnectionHelpers.resolve(
// map results to the list of edges
post.media,
args,
ctx,
),
}),
);
```
The above example assumes that you are paginating a relation to a join table, where the pagination
args are applied based on the relation to that join table, but the nodes themselves are nested
deeper.
`prismaConnectionHelpers` can also be used to manually create a connection where the edge and
connections share the same model, and pagination happens directly on a relation to nodes type (even
if that relation is nested).
```ts
const commentConnectionHelpers = prismaConnectionHelpers(builder, 'Comment', {
cursor: 'id',
});
const SelectPost = builder.prismaObject('Post', {
fields: (t) => ({
title: t.exposeString('title'),
comments: t.connection({
type: commentConnectionHelpers.ref,
select: (args, ctx, nestedSelection) => ({
comments: commentConnectionHelpers.getQuery(args, ctx, nestedSelection),
}),
resolve: (parent, args, ctx) => commentConnectionHelpers.resolve(parent.comments, args, ctx),
}),
}),
});
```
To add arguments for a connection defined with a helper, it is often easiest to define the arguments
on the connection field rather than the connection helper. This allows connection helpers to be
shared between fields that may not share the same arguments:
```ts
const mediaConnectionHelpers = prismaConnectionHelpers(builder, 'PostMedia', {
cursor: 'id',
select: (nodeSelection) => ({
media: nodeSelection({}),
}),
resolveNode: (postMedia) => postMedia.media,
});
builder.prismaObjectField('Post', 'mediaConnection', (t) =>
t.connection({
type: Media,
args: {
inverted: t.arg.boolean(),
},
select: (args, ctx, nestedSelection) => ({
media: {
...mediaConnectionHelpers.getQuery(args, ctx, nestedSelection),
orderBy: {
post: {
createdAt: args.inverted ? 'desc' : 'asc',
},
},
},
}),
resolve: (post, args, ctx) => mediaConnectionHelpers.resolve(post.media, args, ctx),
}),
);
```
Arguments, ordering and filtering can also be defined on the helpers themselves:
```ts
const mediaConnectionHelpers = prismaConnectionHelpers(builder, 'PostMedia', {
cursor: 'id',
// define arguments for the connection helper, these will be available as the second argument of `select`
args: (t) => ({
inverted: t.arg.boolean(),
}),
select: (nodeSelection, args) => ({
media: nodeSelection({}),
}),
query: (args) => ({
// Custom filtering with a where clause
where: {
post: {
published: true,
},
},
// custom ordering including use of args
orderBy: {
post: {
createdAt: args.inverted ? 'desc' : 'asc',
},
},
}),
resolveNode: (postMedia) => postMedia.media,
});
builder.prismaObjectField('Post', 'mediaConnection', (t) =>
t.connection({
type: Media,
// add the args from the connection helper to the field
args: mediaConnectionHelpers.getArgs(),
select: (args, ctx, nestedSelection) => ({
media: mediaConnectionHelpers.getQuery(args, ctx, nestedSelection),
}),
resolve: (post, args, ctx) => mediaConnectionHelpers.resolve(post.media, args, ctx),
}),
);
```
### Sharing Connections objects
You can create reusable connection objects by using `builder.connectionObject`.
These connection objects can be used with `t.prismaConnection`, `t.relatedConnection`, or
`t.connection`
Shared edges can also be created using `t.edgeObject`
```typescript
const CommentConnection = builder.connectionObject({
type: Comment,
// or
type: commentConnectionHelpers.ref,
name: 'CommentConnection',
});
builder.prismaObject('Post', {
fields: (t) => ({
id: t.exposeID('id'),
...
commentsConnection: t.relatedConnection(
'comments',
{ cursor: 'id' },
// The connection object ref can be passed in place of the connection object options
CommentConnection
),
}),
});
```
### Extending connection edges
In some cases you may want to expose some data from an indirect connection on the edge object.
```typescript
const mediaConnectionHelpers = prismaConnectionHelpers(builder, 'PostMedia', {
cursor: 'id',
select: (nodeSelection) => ({
// select the relation to the media node using the nodeSelection function
media: nodeSelection({}),
// Select additional fields from the join table
createdAt: true,
}),
// resolve the node from the edge
resolveNode: (postMedia) => postMedia.media,
});
builder.prismaObjectFields('Post', (t) => ({
manualMediaConnection: t.connection(
{
type: Media,
select: (args, ctx, nestedSelection) => ({
media: mediaConnectionHelpers.getQuery(args, ctx, nestedSelection),
select: {
media: nestedSelection({}, ['edges', 'node']),
},
}),
resolve: (post, args, ctx) =>
mediaConnectionHelpers.resolve(
post.media.map(({ media }) => media),
args,
ctx,
),
},
{},
// options for the edge object
{
// define the additional fields on the edge object
fields: (edge) => ({
createdAt: edge.field({
type: 'DateTime',
// the parent shape for edge fields is inferred from the connections resolve function
resolve: (media) => media.createdAt,
}),
}),
},
),
}));
```
### Total count on shared connection objects
If you set `totalCount: true` on a `prismaConnection` or `relatedConnection` field and are using a
custom connection object, you will need to manually add the `totalCount` field to the connection
object. The parent object on the connection will have a `totalCount` property that is either the
totalCount, or a function that will return the totalCount.
```typescript
const CommentConnection = builder.connectionObject({
type: Comment,
name: 'CommentConnection',
fields: (t) => ({
totalCount: t.int({
resolve: (connection) => {
const { totalCount } = connection as {
totalCount?: number | (() => number | Promise<number>);
};
return typeof totalCount === 'function' ? totalCount() : totalCount;
},
}),
}),
});
```
If you want to add a global `totalCount` field, you can do something similar using
`builder.globalConnectionField`:
```typescript
export const builder = new SchemaBuilder<{
PrismaTypes: PrismaTypes;
Connection: {
totalCount: number | (() => number | Promise<number>);
};
}>({
plugins: [PrismaPlugin, RelayPlugin],
relayOptions: {},
prisma: {
client: db,
},
});
builder.globalConnectionField('totalCount', (t) =>
t.int({
nullable: false,
resolve: (parent) =>
typeof parent.totalCount === 'function' ? parent.totalCount() : parent.totalCount,
}),
);
```
### `parsePrismaCursor` and `formatPrismaCursor`
These functions can be used to manually parse and format cursors that are compatible with prisma
connections.
Parsing a cursor will return the value from the column used for the cursor (often the `id`); this
value may be an array or object when a compound index is used as the cursor. Similarly, to format a
cursor, you must provide the column(s) that make up the cursor.
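For illustration, a minimal cursor round-trip could look like the sketch below. The actual encoding used by `parsePrismaCursor` and `formatPrismaCursor` is an internal detail of the plugin; this only demonstrates the general idea of serializing a cursor column's value into an opaque string:

```typescript
import { Buffer } from 'node:buffer';

// Sketch only: encode/decode a cursor value as base64url JSON. The real
// helpers handle compound cursors and use the plugin's own encoding.
function formatCursor(value: unknown): string {
  return Buffer.from(JSON.stringify(value)).toString('base64url');
}

function parseCursor(cursor: string): unknown {
  return JSON.parse(Buffer.from(cursor, 'base64url').toString('utf8'));
}

// parseCursor(formatCursor(42)) -> 42
// a compound cursor round-trips the same way:
// parseCursor(formatCursor({ postId: 1, mediaId: 2 }))
```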
# Prisma plugin
URL: /docs/plugins/prisma
Prisma plugin docs for Pothos
***
title: Prisma plugin
description: Prisma plugin docs for Pothos
------------------------------------------
This plugin provides tighter integration with prisma, making it easier to define prisma based object
types, and helps solve n+1 queries for relations. It also has integrations for the relay plugin to
make defining nodes and connections easy and efficient.
This plugin is NOT required to use prisma with Pothos, but does make things a lot easier and more
efficient. See the [Using Prisma without a plugin](#using-prisma-without-a-plugin) section below for
more details.
## Features
* 🎨 Quickly define GraphQL types based on your Prisma models
* 🦺 Strong type-safety throughout the entire API
* 🤝 Automatically resolve relationships defined in your database
* 🎣 Automatic Query optimization to efficiently load the specific data needed to resolve a query
(solves common N+1 issues)
* 💅 Types and fields in GraphQL schema are not implicitly tied to the column names or types in your
database.
* 🔀 Relay integration for defining nodes and connections that can be efficiently loaded.
* 📚 Supports multiple GraphQL models based on the same Database model
* 🧮 Count fields can easily be added to objects and connections
## Example
Here is a quick example of what an API using this plugin might look like. There is a more thorough
breakdown of the methods and options used in this example in the sections below.
```typescript
// Create an object type based on a prisma model
// without providing any custom type information
builder.prismaObject('User', {
fields: (t) => ({
// expose fields from the database
id: t.exposeID('id'),
email: t.exposeString('email'),
bio: t.string({
// automatically load the bio from the profile
// when this field is queried
select: {
profile: {
select: {
bio: true,
},
},
},
// user will be typed correctly to include the
// selected fields from above
resolve: (user) => user.profile.bio,
}),
// Load posts as list field.
posts: t.relation('posts', {
args: {
oldestFirst: t.arg.boolean(),
},
// Define custom query options that are applied when
// loading the post relation
query: (args, context) => ({
orderBy: {
createdAt: args.oldestFirst ? 'asc' : 'desc',
},
}),
}),
// creates relay connection that handles pagination
// using prisma's built in cursor based pagination
postsConnection: t.relatedConnection('posts', {
cursor: 'id',
}),
}),
});
// Create a relay node based on a prisma model
builder.prismaNode('Post', {
id: { field: 'id' },
fields: (t) => ({
title: t.exposeString('title'),
author: t.relation('author'),
}),
});
builder.queryType({
fields: (t) => ({
// Define a field that issues an optimized prisma query
me: t.prismaField({
type: 'User',
resolve: async (query, root, args, ctx, info) =>
prisma.user.findUniqueOrThrow({
// the `query` argument will add in `include`s or `select`s to
// resolve as much of the request in a single query as possible
...query,
where: { id: ctx.userId },
}),
}),
}),
});
```
Given this schema, you would be able to resolve a query like the following with a single prisma
query (which will still result in a few optimized SQL queries).
```graphql
query {
me {
email
posts {
title
author {
id
}
}
}
}
```
A query like
```graphql
query {
me {
email
posts {
title
author {
id
}
}
oldPosts: posts(oldestFirst: true) {
title
author {
id
}
}
}
}
```
Will result in 2 calls to prisma, one to resolve everything except `oldPosts`, and a second to
resolve everything inside `oldPosts`. Prisma can only resolve each relation once in a single query,
so we need a separate query to handle the second `posts` relation.
# Indirect relations
URL: /docs/plugins/prisma/indirect-relations
Indirect relations and join tables
***
title: Indirect relations
description: Indirect relations and join tables
-----------------------------------------------
## Selecting fields from a nested GraphQL field
By default, the `nestedSelection` function will return selections based on the type of the current
field. `nestedSelection` can also be used to get a selection from a field nested deeper inside other
fields. This is useful if the field returns a type that is not a `prismaObject`, but a field nested
inside the returned type is.
```typescript
const PostRef = builder.prismaObject('Post', {
fields: (t) => ({
title: t.exposeString('title'),
content: t.exposeString('content'),
author: t.relation('author'),
}),
});
const PostPreview = builder.objectRef('PostPreview').implement({
fields: (t) => ({
post: t.field({
type: PostRef,
resolve: (post) => post,
}),
preview: t.string({
nullable: true,
resolve: (post) => post.content?.slice(10),
}),
}),
});
builder.prismaObject('User', {
fields: (t) => ({
id: t.exposeID('id'),
postPreviews: t.field({
select: (args, ctx, nestedSelection) => ({
posts: nestedSelection(
{
// limit the number of postPreviews to load
take: 2,
},
// Look at the selections in postPreviews.post to determine what relations/fields to select
['post'],
// (optional) If the field returns a union or interface, you can pass a typeName to get selections for a specific object type
'Post',
),
}),
type: [PostPreview],
resolve: (user) => user.posts,
}),
}),
});
```
## Indirect relations (eg. Join tables)
If you want to define a GraphQL field that directly exposes data from a nested relationship (many to
many relations using a custom join table is a common example of this) you can use the
`nestedSelection` function passed to `select`.
Given a prisma schema like the following:
```
model Post {
id Int @id @default(autoincrement())
title String
content String
media PostMedia[]
}
model Media {
id Int @id @default(autoincrement())
url String
posts PostMedia[]
uploadedBy User @relation(fields: [uploadedById], references: [id])
uploadedById Int
}
model PostMedia {
id Int @id @default(autoincrement())
post Post @relation(fields: [postId], references: [id])
media Media @relation(fields: [mediaId], references: [id])
postId Int
mediaId Int
}
```
You can define a media field that can pre-load the correct relations based on the graphql query:
```typescript
const PostDraft = builder.prismaObject('Post', {
fields: (t) => ({
title: t.exposeString('title'),
media: t.field({
select: (args, ctx, nestedSelection) => ({
media: {
select: {
// This will look at what fields are queried on Media
// and automatically select uploadedBy if that relation is requested
media: nestedSelection(
// This argument is the default query for the media relation
// It could be something like: `{ select: { id: true } }` instead
true,
),
},
},
}),
type: [Media],
resolve: (post) => post.media.map(({ media }) => media),
}),
}),
});
const Media = builder.prismaObject('Media', {
select: {
id: true,
},
fields: (t) => ({
url: t.exposeString('url'),
uploadedBy: t.relation('uploadedBy'),
}),
});
```
# Interfaces
URL: /docs/plugins/prisma/interfaces
Creating interfaces for prisma models that can be shared by variants
***
title: Interfaces
description: Creating interfaces for prisma models that can be shared by variants
---------------------------------------------------------------------------------
`builder.prismaInterface` works just like `builder.prismaObject` and can be used to define either
the primary type or a variant for a model.
The following example creates a `User` interface and 2 variants, `Admin` and `Member`. The
`resolveType` method returns the typenames as strings to avoid issues with circular references.
```typescript
const User = builder.prismaInterface('User', {
name: 'User',
fields: (t) => ({
id: t.exposeID('id'),
email: t.exposeString('email'),
}),
resolveType: (user) => {
return user.isAdmin ? 'Admin' : 'Member';
},
});
builder.prismaObject('User', {
variant: 'Admin',
interfaces: [User],
fields: (t) => ({
isAdmin: t.exposeBoolean('isAdmin'),
}),
});
builder.prismaObject('User', {
variant: 'Member',
interfaces: [User],
fields: (t) => ({
bio: t.exposeString('bio'),
}),
});
```
When using select mode, it's recommended to add selections to both the interface and the object
types that implement them. Selections are not inherited and will fall back to the default selection
which includes all scalar columns.
You will not be able to extend an interface for a different prisma model; doing so will result in an
error at build time.
# Prisma Objects
URL: /docs/plugins/prisma/objects
Prisma plugin docs for Pothos
***
title: Prisma Objects
description: Prisma plugin docs for Pothos
------------------------------------------
## Creating types with `builder.prismaObject`
`builder.prismaObject` takes 2 arguments:
1. `name`: The name of the prisma model this new type represents
2. `options`: options for the type being created, this is very similar to the options for any other
object type
```typescript
builder.prismaObject('User', {
// Optional name for the object, defaults to the name of the prisma model
name: 'PostAuthor',
fields: (t) => ({
id: t.exposeID('id'),
email: t.exposeString('email'),
}),
});
builder.prismaObject('Post', {
fields: (t) => ({
id: t.exposeID('id'),
title: t.exposeString('title'),
}),
});
```
So far, this is just creating some simple object types. They work just like any other object type in
Pothos. The main advantage of this is that we get the type information without using object refs, or
needing imports from prisma client.
## Adding prisma fields to non-prisma objects (including Query and Mutation)
There is a new `t.prismaField` method which can be used to define fields that resolve to your prisma
types:
```typescript
builder.queryType({
fields: (t) => ({
me: t.prismaField({
type: 'User',
resolve: async (query, root, args, ctx, info) =>
prisma.user.findUniqueOrThrow({
...query,
where: { id: ctx.userId },
}),
}),
}),
});
```
This method works just like the normal `t.field` method with a couple of differences:
1. The `type` option must contain the name of the prisma model (eg. `User` or `[User]` for a list
field).
2. The `resolve` function has a new first argument `query`, which should be spread into your prisma
   query. This will be used to load data for nested relationships.
You do not need to use this method, and the `builder.prismaObject` method returns an object ref that
can be used like any other object ref (with `t.field`), but using `t.prismaField` will allow you to
take advantage of more efficient queries.
The `query` object will contain an object with `include` or `select` options to pre-load data needed
to resolve nested parts of the current query. The included/selected fields are based on which fields
are being queried, and the options provided when defining those fields and types.
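As a hypothetical illustration, for a request against the schema above that selects a user's posts and each post's author, the `query` argument might contain something like the following (the exact shape depends on how each field and type is defined):

```typescript
// Hypothetical shape only: what `query` might contain for a request that
// selects posts (and each post's author) under the `me` field.
const query = {
  include: {
    posts: {
      include: {
        author: true,
      },
    },
  },
};

// Spreading it into prisma.user.findUniqueOrThrow({ ...query, where: ... })
// loads the user, their posts, and each post's author in a single query.
```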
## Extending prisma objects
The normal `builder.objectField(s)` methods can be used to extend prisma objects, but do not support
using selections, or exposing fields not in the default selection. To use these features, you can
use
`builder.prismaObjectField` or `builder.prismaObjectFields` instead.
# Prisma Utils
URL: /docs/plugins/prisma/prisma-utils
Prisma utils for creating input types
***
title: Prisma Utils
description: Prisma utils for creating input types
--------------------------------------------------
This package is highly experimental and not recommended for production use.
The plugin adds new helpers for creating prisma compatible input types. It is NOT required to use
the normal prisma plugin.
## Setup
To use this plugin, you will need to enable prismaUtils option in the generator in your
schema.prisma:
```prisma
generator pothos {
provider = "prisma-pothos-types"
// Enable prismaUtils feature
prismaUtils = true
}
```
Once this is enabled, you can add the plugin to your schema along with the normal prisma plugin:
```ts
import SchemaBuilder from '@pothos/core';
import { PrismaClient } from '@prisma/client';
import type PrismaTypes from '@pothos/plugin-prisma/generated';
import PrismaPlugin from '@pothos/plugin-prisma';
import PrismaUtils from '@pothos/plugin-prisma-utils';
export const prisma = new PrismaClient({});
export default new SchemaBuilder<{
Scalars: {
DateTime: {
Input: Date;
Output: Date;
};
};
PrismaTypes: PrismaTypes;
}>({
plugins: [PrismaPlugin, PrismaUtils],
prisma: {
client: prisma,
},
});
```
## What can you do with this plugin
Currently this plugin is focused on making it easier to define prisma compatible input types that
take advantage of the types defined in your Prisma schema.
The goal is not to generate all input types automatically, but rather to provide building blocks so
that writing your own helpers or code-generators becomes a lot easier. There are far too many
tradeoffs and choices to be made when designing input types for queries for a single solution to
work for everyone.
This plugin will eventually provide more helpers and examples that should allow anyone to quickly
set something up to automatically create all their input types (and eventually other CRUD
operations).
## What is supported so far
### Creating filter types for scalars and enums
```typescript
const StringFilter = builder.prismaFilter('String', {
  ops: ['contains', 'equals', 'startsWith', 'not'],
});

export const IDFilter = builder.prismaFilter('Int', {
  ops: ['equals', 'not'],
});

builder.enumType(MyEnum, { name: 'MyEnum' });

const MyEnumFilter = builder.prismaFilter(MyEnum, {
  ops: ['not', 'equals'],
});
```
### Creating filters for Prisma objects (compatible with a "where" clause)
```typescript
const UserWhere = builder.prismaWhere('User', {
  fields: {
    id: IDFilter,
  },
});

const PostFilter = builder.prismaWhere('Post', {
  fields: (t) => ({
    // You can use either filters
    id: IDFilter,
    // or scalar types to only support equality
    title: 'String',
    createdAt: 'DateTime',
    // Relations are supported by referencing other `where` types
    author: UserWhere,
    // use t.field to provide other field options
    authorId: t.field({ type: IDFilter, description: 'filter by author id' }),
  }),
});
```
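A `where` input like `PostFilter` above can then be accepted as an argument and passed through to a
prisma query. This is a minimal sketch; the `posts` query field is hypothetical:

```typescript
builder.queryField('posts', (t) =>
  t.prismaField({
    type: ['Post'],
    args: {
      // the input type generated by builder.prismaWhere
      filter: t.arg({ type: PostFilter }),
    },
    resolve: (query, root, args) =>
      prisma.post.findMany({
        ...query,
        // the generated input shape is compatible with prisma's where clause
        where: args.filter ?? undefined,
      }),
  }),
);
```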
### Creating list filters for scalars
```typescript
export const StringListFilter = builder.prismaScalarListFilter('String', {
  name: 'StringListFilter',
  ops: ['has', 'hasSome', 'hasEvery', 'isEmpty', 'equals'],
});
```
### Creating list filters for Prisma objects
```typescript
const UserListFilter = builder.prismaListFilter(UserWhere, {
  ops: ['every', 'some', 'none'],
});
```
### Creating OrderBy input types
```typescript
const UserOrderBy = builder.prismaOrderBy('User', {
  fields: {
    name: true,
  },
});

export const PostOrderBy = builder.prismaOrderBy('Post', {
  fields: () => ({
    id: true,
    title: true,
    createdAt: true,
    author: UserOrderBy,
  }),
});
```
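These order-by inputs can be forwarded to prisma the same way filters are. A sketch; the `posts`
query field is hypothetical:

```typescript
builder.queryField('posts', (t) =>
  t.prismaField({
    type: ['Post'],
    args: {
      orderBy: t.arg({ type: PostOrderBy }),
    },
    resolve: (query, root, args) =>
      prisma.post.findMany({
        ...query,
        // the generated input shape matches prisma's orderBy clause
        orderBy: args.orderBy ?? undefined,
      }),
  }),
);
```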
### Inputs for create mutations
You can use `builder.prismaCreate` to create input types for create mutations.
To get these types to work correctly for circular references, it is recommended to add explicit type
annotations, but for simple types that do not have circular references the explicit types can be
omitted.
```ts
import { InputObjectRef } from '@pothos/core';
import { Prisma } from '@prisma/client';

export const UserCreate: InputObjectRef<Prisma.UserCreateInput> = builder.prismaCreate('User', {
  name: 'UserCreate',
  fields: () => ({
    // scalars
    id: 'Int',
    email: 'String',
    name: 'String',
    // inputs for relations need to be defined separately as shown below
    profile: UserCreateProfile,
    // create fields for list relations are defined just like normal relations.
    // Pothos will automatically handle making the inputs lists
    posts: UserCreatePosts,
  }),
});

export const UserCreateProfile = builder.prismaCreateRelation('User', 'profile', {
  fields: () => ({
    // created with builder.prismaCreate as shown above for User
    create: ProfileCreateWithoutUser,
    // created with builder.prismaWhereUnique
    connect: ProfileUniqueFilter,
  }),
});

export const UserCreatePosts = builder.prismaCreateRelation('User', 'posts', {
  fields: () => ({
    // created with builder.prismaCreate as shown above for User
    create: PostCreateWithoutAuthor,
    // created with builder.prismaWhereUnique
    connect: PostUniqueFilter,
  }),
});
```
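An input like `UserCreate` can then be used as a mutation argument and passed straight through to
prisma. A sketch; the `createUser` mutation is hypothetical:

```typescript
builder.mutationField('createUser', (t) =>
  t.prismaField({
    type: 'User',
    args: {
      data: t.arg({ type: UserCreate, required: true }),
    },
    resolve: (query, root, args) =>
      prisma.user.create({
        ...query,
        // the generated input shape matches prisma's UserCreateInput
        data: args.data,
      }),
  }),
);
```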
### Inputs for update mutations
You can use `builder.prismaUpdate` to create input types for update mutations.
To get these types to work correctly for circular references, it is recommended to add explicit type
annotations, but for simple types that do not have circular references the explicit types can be
omitted.
```ts
export const UserUpdate: InputObjectRef<Prisma.UserUpdateInput> = builder.prismaUpdate('User', {
  name: 'UserUpdate',
  fields: () => ({
    id: 'Int',
    email: 'String',
    name: 'String',
    // inputs for relations need to be defined separately as shown below
    profile: UserUpdateProfile,
    posts: UserUpdatePosts,
  }),
});

export const UserUpdateProfile = builder.prismaUpdateRelation('User', 'profile', {
  fields: () => ({
    // created with builder.prismaCreate
    create: ProfileCreateWithoutUser,
    // created with builder.prismaUpdate
    update: ProfileUpdateWithoutUser,
    // created with builder.prismaWhereUnique
    connect: ProfileUniqueFilter,
  }),
});

export const UserUpdatePosts = builder.prismaUpdateRelation('User', 'posts', {
  fields: () => ({
    // Not all update methods need to be defined
    // created with builder.prismaCreate
    create: PostCreateWithoutAuthor,
    // created with builder.prismaCreateMany
    createMany: {
      skipDuplicates: 'Boolean',
      data: PostCreateManyWithoutAuthor,
    },
    // created with builder.prismaWhereUnique
    set: PostUniqueFilter,
    disconnect: PostUniqueFilter,
    delete: PostUniqueFilter,
    connect: PostUniqueFilter,
    update: {
      // created with builder.prismaWhereUnique
      where: PostUniqueFilter,
      // created with builder.prismaUpdate
      data: PostUpdateWithoutAuthor,
    },
    updateMany: {
      // created with builder.prismaWhere
      where: PostWithoutAuthorFilter,
      // created with builder.prismaUpdate
      data: PostUpdateWithoutAuthor,
    },
    // created with builder.prismaWhere
    deleteMany: PostWithoutAuthorFilter,
  }),
});
```
#### Atomic Int Update operations
```ts
const IntUpdate = builder.prismaIntAtomicUpdate();

// or with options:
const IntUpdateWithOptions = builder.prismaIntAtomicUpdate({
  name: 'IntUpdate',
  ops: ['increment', 'decrement'],
});

export const PostUpdate = builder.prismaUpdate('Post', {
  name: 'PostUpdate',
  fields: () => ({
    title: 'String',
    views: IntUpdate,
  }),
});
```
## Generators
Manually defining all the different input types shown above for a large number of tables can become
very repetitive. These utilities are designed to be building blocks for generators or utility
functions, so that you don't need to hand write these types yourself.
Pothos does not currently ship an official generator for prisma types, but there are a couple of
example generators that can be copied and modified to suit your needs. These are intentionally
somewhat limited in functionality and not written to be easily exported, because they will be
updated with breaking changes as these utilities are developed further. They are only intended as
building blocks for you to build your own generators.
There are 2 main approaches:
1. Static Generation: Types are generated and written as a typescript file which can be imported
from as part of your schema
2. Dynamic Generation: Types are generated dynamically at runtime through helpers imported from your
App
### Static generator
You can find an
[example static generator here](https://github.com/hayes/pothos/blob/main/packages/plugin-prisma-utils/tests/examples/codegen/generator.ts)
This generator will generate a file with input types for every table in your schema as shown
[here](https://github.com/hayes/pothos/blob/main/packages/plugin-prisma-utils/tests/examples/codegen/schema/prisma-inputs.ts)
These generated types can be used in your schema as shown
[here](https://github.com/hayes/pothos/blob/main/packages/plugin-prisma-utils/tests/examples/codegen/schema/index.ts)
### Dynamic generator
You can find an example
[dynamic generator here](https://github.com/hayes/pothos/blob/main/packages/plugin-prisma-utils/tests/examples/crud/generator.ts)
This generator exports a class that can be used to dynamically create input types for your builder
as shown
[here](https://github.com/hayes/pothos/blob/main/packages/plugin-prisma-utils/tests/examples/crud/schema/index.ts#L9-L20)
# Relations
URL: /docs/plugins/prisma/relations
Adding relations to prisma objects
***
title: Relations
description: Adding relations to prisma objects
----------------------------------------------
You can add fields for relations using the `t.relation` method:
```typescript
builder.queryType({
  fields: (t) => ({
    me: t.prismaField({
      type: 'User',
      resolve: async (query, root, args, ctx, info) =>
        prisma.user.findUniqueOrThrow({
          ...query,
          where: { id: ctx.userId },
        }),
    }),
  }),
});

builder.prismaObject('User', {
  fields: (t) => ({
    id: t.exposeID('id'),
    email: t.exposeString('email'),
    posts: t.relation('posts'),
  }),
});

builder.prismaObject('Post', {
  fields: (t) => ({
    id: t.exposeID('id'),
    title: t.exposeString('title'),
    author: t.relation('author'),
  }),
});
```
`t.relation` defines a field that can be pre-loaded by a parent resolver. This will create something
like `{ include: { author: true }}` that will be passed as part of the `query` argument of a
`prismaField` resolver. If the parent is another `relation` field, the includes will become nested,
and the full relation chain will be passed to the `prismaField` that started the chain.
For example, given the query:
```graphql
query {
me {
posts {
author {
id
}
}
}
}
```
the `me` `prismaField` would receive something like the following as its query parameter:
```typescript
{
  include: {
    posts: {
      include: {
        author: true,
      },
    },
  },
}
```
This will work perfectly for the majority of queries. There are a number of edge cases that make it
impossible to resolve everything in a single query. When this happens, Pothos will automatically
construct an additional query to ensure that everything is still loaded correctly, split into as
few efficient queries as possible. This process is described in more detail below.
### Fallback queries
There are some cases where data can not be pre-loaded by a prisma field. In these cases, Pothos will
issue a `findUnique` query for the parent of any fields that were not pre-loaded, and select the
missing relations so those fields can be resolved with the correct data. These queries should be
very efficient, are batched by Pothos to combine requirements for multiple fields into one query,
and are batched by Prisma to combine multiple queries (in an n+1 situation) into a single sql query.
The following are some edge cases that could cause an additional query to be necessary:
* The parent object was not loaded through a field defined with `t.prismaField`, or `t.relation`
* The root `prismaField` did not correctly spread the `query` arguments in its prisma call.
* The query selects multiple fields that use the same relation with different filters, sorting, or
limits
* The query contains multiple aliases for the same relation field with different arguments in a way
that results in different query options for the relation.
* A relation field has a query that is incompatible with the default includes of the parent object
All of the above should be relatively uncommon in normal usage, but the plugin ensures that these
types of edge cases are automatically handled when they do occur.
### Filters, Sorting, and arguments
So far we have been describing very simple queries without any arguments, filtering, or sorting. For
`t.prismaField` definitions, you can add arguments to your field like normal, and pass them into
your prisma query as needed. For `t.relation`, the flow is slightly different because we are not
making a prisma query directly. Instead, we add a `query` option to our field options. `query` can
either be a query object, or a method that returns a query object based on the field arguments.
```typescript
builder.prismaObject('User', {
  fields: (t) => ({
    id: t.exposeID('id'),
    posts: t.relation('posts', {
      // We can define arguments like any other field
      args: {
        oldestFirst: t.arg.boolean(),
      },
      // Then we can generate our query conditions based on the arguments
      query: (args, context) => ({
        orderBy: {
          createdAt: args.oldestFirst ? 'asc' : 'desc',
        },
      }),
    }),
  }),
});
```
The returned query object will be added to the include section of the `query` argument that gets
passed into the first argument of the parent `t.prismaField`, and can include things like `where`,
`skip`, `take`, and `orderBy`. The `query` function will be passed the arguments for the field, and
the context for the current request. Because it is used for pre-loading data, and solving n+1
issues, it can not be passed the `parent` object because it may not be loaded yet.
### relationCount
Prisma supports querying for
[relation counts](https://www.prisma.io/docs/concepts/components/prisma-client/aggregation-grouping-summarizing#count-relations)
which allow including counts for relations alongside other `includes`. Before prisma 4.2.0, this
did not support any filters on the counts, but could give a total count for a relation. Starting
with prisma 4.2.0, filters on relation counts are available under the `filteredRelationCount`
preview feature flag.
```typescript
builder.prismaObject('User', {
  fields: (t) => ({
    id: t.exposeID('id'),
    postCount: t.relationCount('posts', {
      where: {
        published: true,
      },
    }),
  }),
});
```
# Relay
URL: /docs/plugins/prisma/relay
Using the Prisma and Relay plugins together
***
title: Relay
description: Using the Prisma and Relay plugins together
--------------------------------------------------------
This plugin has extensive integration with the
[relay plugin](https://pothos-graphql.dev/docs/plugins/relay), which makes creating nodes and
connections very easy.
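For example, relations can be exposed as relay connections with `t.relatedConnection`, which
resolves cursor based pagination through the prisma query. A brief sketch, assuming a `User` model
with a `posts` relation:

```typescript
builder.prismaObject('User', {
  fields: (t) => ({
    id: t.exposeID('id'),
    // creates a relay connection for the posts relation, using the
    // post's id column as the pagination cursor
    posts: t.relatedConnection('posts', {
      cursor: 'id',
    }),
  }),
});
```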
### `prismaNode`
The `prismaNode` method works just like the `prismaObject` method with a couple of small
differences:
* there is a new `id` option that mirrors the `id` option from the `node` method of the relay
  plugin, and must contain a resolve function that returns the id from an instance of the node.
  Rather than defining a resolver for the id field, you can set the `field` option to the name of a
  unique column or index.
```typescript
builder.prismaNode('Post', {
  // This sets which database field to use for the node's id field
  id: { field: 'id' },
  // fields work just like they do for builder.prismaObject
  fields: (t) => ({
    title: t.exposeString('title'),
    author: t.relation('author'),
  }),
});
```
If you need to customize how ids are formatted, you can add a resolver for the `id`, and provide a
`findUnique` option that can be used to load the node by its id. This is generally not necessary.
```typescript
builder.prismaNode('Post', {
  id: { resolve: (post) => String(post.id) },
  // The return value will be passed as the `where` of a `prisma.post.findUnique`
  findUnique: (id) => ({ id: Number.parseInt(id, 10) }),
  fields: (t) => ({
    title: t.exposeString('title'),
    author: t.relation('author'),
  }),
});
```
When executing the `node(id: ID!)` query with a global ID for which prisma cannot find a record in
the database, the default behavior is to throw an error. There are some scenarios where it is
preferable to return `null` instead of throwing an error. For this you can add the `nullable: true`
option:
```typescript
builder.prismaNode('Post', {
  id: { resolve: (post) => String(post.id) },
  nullable: true,
  fields: (t) => ({
    title: t.exposeString('title'),
    author: t.relation('author'),
  }),
});
```
# Selections
URL: /docs/plugins/prisma/selections
how to use custom includes and selections to optimize your prisma queries
***
title: Selections
description: how to use custom includes and selections to optimize your prisma queries
--------------------------------------------------------------------------------------
## Includes on types
In some cases, you may want to always pre-load certain relations. This can be helpful for defining
fields directly on a type where the underlying data comes from a related table.
```typescript
builder.prismaObject('User', {
  // This will always include the profile when a user object is loaded. Deeply nested relations can
  // also be included this way.
  include: {
    profile: true,
  },
  fields: (t) => ({
    id: t.exposeID('id'),
    email: t.exposeString('email'),
    bio: t.string({
      // The profile relation will always be loaded, and user will now be typed to include the
      // profile field so you can return the bio from the nested profile relation.
      resolve: (user) => user.profile.bio,
    }),
  }),
});
```
## Select mode for types
By default, the prisma plugin will use `include` when including relations, or generating fallback
queries. This means we are always loading all columns of a table when loading it in a
`t.prismaField` or a `t.relation`. This is usually what we want, but in some cases, you may want to
select specific columns instead. This can be useful if you have tables with either a very large
number of columns, or specific columns with large payloads you want to avoid loading.
To do this, you can add a `select` instead of an include to your `prismaObject`:
```typescript
builder.prismaObject('User', {
  select: {
    id: true,
  },
  fields: (t) => ({
    id: t.exposeID('id'),
    email: t.exposeString('email'),
  }),
});
```
The `t.expose*` and `t.relation` methods will all automatically add selections for the exposed
fields *WHEN THEY ARE QUERIED*, ensuring that only the requested columns will be loaded from the
database.
In addition to the `t.expose` and `t.relation`, you can also add custom selections to other fields:
```typescript
builder.prismaObject('User', {
  select: {
    id: true,
  },
  fields: (t) => ({
    id: t.exposeID('id'),
    email: t.exposeString('email'),
    bio: t.string({
      // This will select user.profile.bio when the `bio` field is queried
      select: {
        profile: {
          select: {
            bio: true,
          },
        },
      },
      resolve: (user) => user.profile.bio,
    }),
  }),
});
```
## Using arguments or context in your selections
The following is a slightly contrived example, but shows how arguments can be used when creating a
selection for a field:
```typescript
const PostDraft = builder.prismaObject('Post', {
  fields: (t) => ({
    title: t.exposeString('title'),
    commentFromDate: t.string({
      args: {
        date: t.arg({ type: 'Date', required: true }),
      },
      select: (args) => ({
        comments: {
          take: 1,
          where: {
            createdAt: {
              gt: args.date,
            },
          },
        },
      }),
      resolve: (post) => post.comments[0]?.content,
    }),
  }),
});
```
## Optimized queries without `t.prismaField`
In some cases, it may be useful to get an optimized query for fields where you can't use
`t.prismaField`.
This may be required when combining with other plugins, or because your query does not directly
return a prisma object. In these cases, you can use the `queryFromInfo` helper. An example of this
might be a mutation that wraps the prisma object in a result type.
```typescript
import { queryFromInfo } from '@pothos/plugin-prisma';

const Post = builder.prismaObject('Post', {...});

const CreatePostResult = builder.objectRef<{
  success: boolean;
  post?: typeof Post.$inferType;
}>('CreatePostResult');

CreatePostResult.implement({
  fields: (t) => ({
    success: t.exposeBoolean('success'),
    post: t.field({
      type: Post,
      nullable: true,
      resolve: (result) => result.post,
    }),
  }),
});

builder.mutationField('createPost', (t) =>
  t.field({
    type: CreatePostResult,
    args: {
      title: t.arg.string({ required: true }),
      // ...
    },
    resolve: async (parent, args, context, info) => {
      if (!validateCreatePostArgs(args)) {
        return {
          success: false,
        };
      }

      const post = await prisma.post.create({
        ...queryFromInfo({
          context,
          info,
          // nested path where the selections for this type can be found
          path: ['post'],
          // optionally you can pass a custom initial selection; generally you wouldn't
          // need this, but if the field at `path` is not selected, the initial
          // selection set may be empty
          select: {
            comments: true,
          },
        }),
        data: {
          title: args.title,
          // ...
        },
      });

      return {
        success: true,
        post,
      };
    },
  }),
);
```
# Setup
URL: /docs/plugins/prisma/setup
Setting up the Prisma plugin
***
title: Setup
description: Setting up the Prisma plugin
-----------------------------------------
```package-install
npm install --save @pothos/plugin-prisma
```
## Setup
This plugin requires a little more setup than other plugins because it integrates with prisma to
generate some types that help the plugin better understand your prisma schema. Previous versions of
this plugin inferred all required types from the prisma client itself, but this resulted in a poor
dev experience because the complex types slowed down editors, and some more advanced use cases could
not be typed correctly.
### Add the `pothos` generator to your prisma schema
```prisma
generator pothos {
  provider = "prisma-pothos-types"
}
```
Now the types Pothos uses will be generated whenever you re-generate your prisma client. Run the
following command to re-generate the client and create the new types:
```sh
npx prisma generate
```
Additional options:
* `clientOutput`: Where the generated code will import the PrismaClient from. The default is the
full path of wherever the client is generated. If you are checking in the generated file, using
`@prisma/client` is a good option.
* `output`: Where to write the generated types
Example with more options:
```prisma
generator pothos {
  provider     = "prisma-pothos-types"
  clientOutput = "@prisma/client"
  output       = "./pothos-types.ts"
}
```
When using the new `prisma-client` generator, the `clientOutput` option should match the `output`
option of the `prisma-client` generator:
```prisma
generator client {
  provider = "prisma-client"
  output   = "./prisma-client"
}

generator pothos {
  provider     = "prisma-pothos-types"
  clientOutput = "./prisma-client"
  output       = "./pothos-types.ts"
}
```
### Set up the builder
```typescript
import SchemaBuilder from '@pothos/core';
import { PrismaClient } from '@prisma/client';
import PrismaPlugin from '@pothos/plugin-prisma';
// This is the default location for the generator, but this can be
// customized as described above.
// Using a type only import will help avoid issues with undeclared
// exports in esm mode
import type PrismaTypes from '@pothos/plugin-prisma/generated';
const prisma = new PrismaClient({});
const builder = new SchemaBuilder<{
  PrismaTypes: PrismaTypes;
}>({
  plugins: [PrismaPlugin],
  prisma: {
    client: prisma,
    // defaults to false, uses /// comments from prisma schema as descriptions
    // for object types, relations and exposed fields.
    // descriptions can be omitted by setting description to false
    exposeDescriptions: boolean | { models: boolean, fields: boolean },
    // use where clause from prismaRelatedConnection for totalCount (defaults to true)
    filterConnectionTotalCount: true,
    // warn when not using a query parameter correctly
    onUnusedQuery: process.env.NODE_ENV === 'production' ? null : 'warn',
  },
});
```
It is strongly recommended NOT to put your prisma client into `Context`. This will result in slower
type-checking and a laggy developer experience in VSCode. See
[this issue](https://github.com/microsoft/TypeScript/issues/45405) for more details.
You can also load or create the prisma client dynamically for each request. This can be used to
periodically re-create clients or create read-only clients for certain types of users.
```typescript
import SchemaBuilder from '@pothos/core';
import { PrismaClient, Prisma } from '@prisma/client';
import PrismaPlugin from '@pothos/plugin-prisma';
import type PrismaTypes from '@pothos/plugin-prisma/generated';
const prisma = new PrismaClient({});
const readOnlyPrisma = new PrismaClient({
  datasources: {
    db: {
      url: process.env.READ_ONLY_REPLICA_URL,
    },
  },
});

const builder = new SchemaBuilder<{
  Context: { user: { isAdmin: boolean } };
  PrismaTypes: PrismaTypes;
}>({
  plugins: [PrismaPlugin],
  prisma: {
    client: (ctx) => (ctx.user.isAdmin ? prisma : readOnlyPrisma),
    // Because the prisma client is loaded dynamically, we need to explicitly
    // provide some information about the prisma schema
    dmmf: Prisma.dmmf,
  },
});
```
## Edge run-times
When prisma is built for edge run-times like cloudflare workers, the prisma client no longer exposes
the dmmf datamodel Pothos uses when building the schema. To work around this, you can have the
pothos generator generate the datamodel instead:
```prisma
generator pothos {
  provider          = "prisma-pothos-types"
  clientOutput      = "@prisma/client"
  output            = "./pothos-types.ts"
  generateDatamodel = true
  documentation     = false
}
```
When using the `generateDatamodel` option, the prisma client will add a `getDatamodel` function in
the generated output. When using this option, you should be using a `.ts` file rather than a `.d.ts`
file for the output.
When setting up the builder, you can now use the `getDatamodel` function:
```typescript
import SchemaBuilder from '@pothos/core';
import { PrismaClient, Prisma } from '@prisma/client';
import PrismaPlugin from '@pothos/plugin-prisma';
import type PrismaTypes from '@pothos/plugin-prisma/generated';
import { getDatamodel } from '@pothos/plugin-prisma/generated';
const prisma = new PrismaClient({});
const builder = new SchemaBuilder<{
  Context: { user: { isAdmin: boolean } };
  PrismaTypes: PrismaTypes;
}>({
  plugins: [PrismaPlugin],
  prisma: {
    client: prisma,
    dmmf: getDatamodel(),
  },
});
```
## Detecting unused query arguments
Forgetting to spread the `query` argument from `t.prismaField` or `t.prismaConnection` into your
prisma query can result in inefficient queries, or even missing data. To help catch these issues,
the plugin can warn you when you are not using the query argument correctly.
The `onUnusedQuery` option can be set to `warn` or `error` to enable this feature. When set to
`warn`, it will log a warning to the console if Pothos detects that you have not properly used the
query in your resolver. Similarly, if you set the option to `error`, it will throw an error instead.
You can also pass a function, which will receive the `info` object and can be used to log or throw
your own error.
This check is fairly naive and works by wrapping the properties on the query with a getter that sets
a flag if the property is accessed. If no properties are accessed on the query object before the
resolver returns, it will trigger the `onUnusedQuery` condition.
It's recommended to enable this check in development to more quickly find potential issues.
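As a sketch, a custom handler passed to `onUnusedQuery` might look something like this (the error
message is illustrative):

```typescript
const builder = new SchemaBuilder<{
  PrismaTypes: PrismaTypes;
}>({
  plugins: [PrismaPlugin],
  prisma: {
    client: prisma,
    // the handler receives the GraphQLResolveInfo for the offending field
    onUnusedQuery: (info) => {
      throw new Error(
        `Unused query parameter in resolver for ${info.parentType.name}.${info.fieldName}`,
      );
    },
  },
});
```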
# Type variants
URL: /docs/plugins/prisma/variants
How to define multiple GraphQL types based on the same prisma model
***
title: Type variants
description: How to define multiple GraphQL types based on the same prisma model
--------------------------------------------------------------------------------
The prisma plugin supports defining multiple GraphQL types based on the same prisma model.
Additional types are called `variants`. You will always need to have a "Primary" variant (defined as
described above). Additional variants can be defined by providing a `variant` option instead of a
`name` option when creating the type:
```typescript
const Viewer = builder.prismaObject('User', {
  variant: 'Viewer',
  fields: (t) => ({
    id: t.exposeID('id'),
  }),
});
```
You can define variant fields that reference one variant from another:
```typescript
const Viewer = builder.prismaObject('User', {
  variant: 'Viewer',
  fields: (t) => ({
    id: t.exposeID('id'),
    // Using the model name ('User') will reference the primary variant
    user: t.variant('User'),
  }),
});

const User = builder.prismaNode('User', {
  id: {
    resolve: (user) => user.id,
  },
  fields: (t) => ({
    // To reference another variant, use the returned object Ref instead of the model name:
    viewer: t.variant(Viewer, {
      // return null for viewer if the parent User is not the current user
      isNull: (user, args, ctx) => user.id !== ctx.user.id,
    }),
    email: t.exposeString('email'),
  }),
});
```
You can also use variants when defining relations by providing a `type` option:
```typescript
const PostDraft = builder.prismaNode('Post', {
  variant: 'PostDraft',
  // This sets which database field to use for the node's id field
  id: { field: 'id' },
  // fields work just like they do for builder.prismaObject
  fields: (t) => ({
    title: t.exposeString('title'),
    author: t.relation('author'),
  }),
});

const Viewer = builder.prismaObject('User', {
  variant: 'Viewer',
  fields: (t) => ({
    id: t.exposeID('id'),
    drafts: t.relation('posts', {
      // This will cause this relation to use the PostDraft variant rather than the default Post variant
      type: PostDraft,
      query: { where: { draft: true } },
    }),
  }),
});
```
You may run into circular reference issues if you use 2 prisma object refs to reference each other.
To avoid this, you can split out the field definition for one of the relationships using
`builder.prismaObjectField`:
```typescript
const Viewer = builder.prismaObject('User', {
  variant: 'Viewer',
  fields: (t) => ({
    id: t.exposeID('id'),
  }),
});

const User = builder.prismaNode('User', {
  interfaces: [Named],
  id: {
    resolve: (user) => user.id,
  },
  fields: (t) => ({
    email: t.exposeString('email'),
  }),
});

// Viewer references the `User` ref in its field definition;
// referencing `User` directly in Viewer's fields would cause a circular type issue,
// so the field is added separately here
builder.prismaObjectField(Viewer, 'user', (t) => t.variant(User));
```
This same workaround applies when defining relations using variants.
# Prisma without a plugin
URL: /docs/plugins/prisma/without-a-plugin
Using Prisma without a plugin
***
title: Prisma without a plugin
description: Using Prisma without a plugin
------------------------------------------
Using prisma without a plugin is relatively straightforward using the `builder.objectRef` method.
The easiest way to create types backed by prisma looks something like:
```typescript
import { Post, PrismaClient, User } from '@prisma/client';

const db = new PrismaClient();

const UserObject = builder.objectRef<User>('User');
const PostObject = builder.objectRef<Post>('Post');

UserObject.implement({
  fields: (t) => ({
    id: t.exposeID('id'),
    email: t.exposeString('email'),
    posts: t.field({
      type: [PostObject],
      resolve: (user) =>
        db.post.findMany({
          where: { authorId: user.id },
        }),
    }),
  }),
});

PostObject.implement({
  fields: (t) => ({
    id: t.exposeID('id'),
    title: t.exposeString('title'),
    author: t.field({
      type: UserObject,
      resolve: (post) => db.user.findUniqueOrThrow({ where: { id: post.authorId } }),
    }),
  }),
});

builder.queryType({
  fields: (t) => ({
    me: t.field({
      type: UserObject,
      resolve: (root, args, ctx) => db.user.findUniqueOrThrow({ where: { id: ctx.userId } }),
    }),
  }),
});
```
This sets up User, and Post objects with a few fields, and a `me` query that returns the current
user. There are a few things to note in this setup:
1. We split up the `builder.objectRef` and the `implement` calls, rather than calling
`builder.objectRef(...).implement(...)`. This prevents typescript from getting tripped up by the
circular references between posts and users.
2. We use `findUniqueOrThrow` because those fields are not nullable. With `findUnique`, prisma will
   return null if the object is not found. An alternative is to mark these fields as nullable.
3. The refs to our object types are called `UserObject` and `PostObject` because `User` and `Post`
   are the names of the types imported from prisma. We could instead alias the types when we import
   them so we can name the refs to our GraphQL types after the models.
This setup is fairly simple, but it is easy to see the n+1 issues we might run into. Prisma helps
with this by batching queries together, but there are also things we can do in our implementation to
improve things.
One thing we could do, if we know we will usually be loading the author any time we load a post, is
to make the author part of the shape required for a post:
```typescript
const UserObject = builder.objectRef<User>('User');
// We add the author here in the objectRef
const PostObject = builder.objectRef<Post & { author: User }>('Post');

UserObject.implement({
  fields: (t) => ({
    id: t.exposeID('id'),
    email: t.exposeString('email'),
    posts: t.field({
      type: [PostObject],
      resolve: (user) =>
        db.post.findMany({
          // We now need to include the author when we query for posts
          include: {
            author: true,
          },
          where: { authorId: user.id },
        }),
    }),
  }),
});

PostObject.implement({
  fields: (t) => ({
    id: t.exposeID('id'),
    title: t.exposeString('title'),
    author: t.field({
      type: UserObject,
      // Now we can just return the author from the post instead of querying for it
      resolve: (post) => post.author,
    }),
  }),
});
```
We may not always want to query for the author though, so we could make the author optional and fall
back to using a query if it was not provided by the parent resolver:
```typescript
const PostObject = builder.objectRef<Post & { author?: User }>('Post');

PostObject.implement({
  fields: (t) => ({
    id: t.exposeID('id'),
    title: t.exposeString('title'),
    author: t.field({
      type: UserObject,
      resolve: (post) =>
        post.author ?? db.user.findUniqueOrThrow({ where: { id: post.authorId } }),
    }),
  }),
});
```
With this setup, a parent resolver has the option to include the author, but we have a fallback in
case it does not.
There are other patterns, like dataloaders, that can be used to reduce n+1 issues and make your
graph more efficient, but they are too complex to describe here.