⛏️ checkToxicityText

Description

Check toxicity of a piece of text. This functionality is developed using [this](https://huggingface.co/unitary/toxic-bert) model.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| prompt | string | The piece of text to check for toxicity. |

Response

| Parameter | Type | Description |
| --- | --- | --- |
| message | string | A message describing the outcome of the request. |
| status | int | 1 for successful execution; -1 if any error occurs during execution of the request. |
| response | object | An object with six properties, one per toxicity class. Each value is true if the text falls into that class and false otherwise. |
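Since `status` signals whether the call succeeded, a caller would typically check it before reading `response`. A minimal sketch in TypeScript (the result shape follows the fields described above; `isToxic` is an illustrative helper, not part of the SDK):

```typescript
// Result shape as described in this page's Response section
interface ToxicityResult {
  message: string;
  status: number; // 1 = success, -1 = error
  response?: Record<string, boolean>; // six toxicity classes
}

// Illustrative helper: treat the text as toxic if any class is flagged
function isToxic(result: ToxicityResult): boolean {
  if (result.status !== 1) {
    throw new Error(`checkToxicityText failed: ${result.message}`);
  }
  return Object.values(result.response ?? {}).some(Boolean);
}
```

This keeps error handling in one place: a `-1` status surfaces as an exception carrying the `message` field, and a successful result reduces to a single boolean.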

Example Request and Response

Prerequisites

Before making requests with the NEST® SDK, you must have it installed. You can install it with either npm or yarn:

```shell
npm install @nest25/ai-core-sdk
```

OR

```shell
yarn add @nest25/ai-core-sdk
```

Request

Here is an example of how to make a `checkToxicityText` request using the NEST® SDK:

```typescript
// import the ai-core-sdk
import { AIServices } from '@nest25/ai-core-sdk';

// create a new instance of the sdk
const aiServices = new AIServices();

async function main() {
  // get the result of the toxicity check
  const result = await aiServices.checkToxicityText('this is a prompt');
  console.log(result);
}

main();
```

Response

```json
{
    "message": "Request successful",
    "response": {
        "identity_hate": true,
        "insult": false,
        "obscene": false,
        "severe_toxic": false,
        "threat": false,
        "toxic": true
    },
    "status": 1
}
```
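Because every property of `response` is a boolean, you can easily collect the categories that were flagged. A short sketch (the response shape is copied from the example above; `flaggedCategories` is an illustrative helper, not an SDK function):

```typescript
// Shape of the `response` object returned on success
interface ToxicityResponse {
  identity_hate: boolean;
  insult: boolean;
  obscene: boolean;
  severe_toxic: boolean;
  threat: boolean;
  toxic: boolean;
}

// Return the names of all categories flagged as toxic
function flaggedCategories(response: ToxicityResponse): string[] {
  return Object.entries(response)
    .filter(([, isFlagged]) => isFlagged)
    .map(([category]) => category);
}

// Using the example response above:
const example: ToxicityResponse = {
  identity_hate: true,
  insult: false,
  obscene: false,
  severe_toxic: false,
  threat: false,
  toxic: true,
};

console.log(flaggedCategories(example)); // ["identity_hate", "toxic"]
```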
