{"id":14692,"date":"2024-08-28T10:15:59","date_gmt":"2024-08-28T09:15:59","guid":{"rendered":"https:\/\/ee.yelkdev.site\/?p=14692"},"modified":"2025-03-27T12:54:40","modified_gmt":"2025-03-27T12:54:40","slug":"safer-llm-responses-using-build-in-guardrails","status":"publish","type":"post","link":"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/","title":{"rendered":"Safer LLM responses using build-in guardrails"},"content":{"rendered":"<p>To prevent the generation of harmful responses, most generative AI uses something called \u2018guardrails.\u2019 In this article, we\u2019ll look at one example of guardrails, called \u2018refusal\u2019.<\/p>\n<p>Large language models (LLMs) like ChatGPT are trained to give us helpful answers while refusing to provide harmful responses. That\u2019s important because when you\u2019re providing information based on everything available on the Internet, there are obvious safety concerns. Not only could an LLM generate hate speech or misinformation, it could also provide dangerous information such as how to make a weapon.<\/p>\n<h2>How refusal works in an LLM<\/h2>\n<p>Refusal is a type of guardrail where the LLM declines to generate a response when a query meets a set of defined criteria deemed harmful. We can create refusal guardrails in several ways, such as:<\/p>\n<ul>\n<li aria-level=\"1\">Filtering and de-biasing the training data<\/li>\n<li aria-level=\"1\">Fine-tuning the model to recognise inappropriate content<\/li>\n<li aria-level=\"1\">Adding moderation layers to the response output<\/li>\n<\/ul>\n<p>Refusal can be an effective way of moderating LLM responses. 
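<\/p>
<p>As a concrete illustration of the third approach, a moderation layer screens both the user\u2019s prompt and the model\u2019s output before anything reaches the user. The sketch below is hypothetical: generate() stands in for a real LLM call and is_harmful() for a real trained moderation classifier.<\/p>

```python
# All names here are illustrative stand-ins, not a real moderation API.
REFUSAL_MESSAGE = 'Sorry, I cannot help with that request.'

def generate(prompt):
    # stand-in for a real LLM call
    return 'model response to: ' + prompt

def is_harmful(text):
    # stand-in for a trained moderation classifier (naive keyword check)
    blocked_terms = ['weapon', 'explosive']
    return any(term in text.lower() for term in blocked_terms)

def moderated_generate(prompt):
    # screen the prompt before generation, and the response after
    if is_harmful(prompt):
        return REFUSAL_MESSAGE
    response = generate(prompt)
    if is_harmful(response):
        return REFUSAL_MESSAGE
    return response

print(moderated_generate('how do I make a weapon'))  # refusal message
```

<p>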
For example, here\u2019s a prompt that I put into ChatGPT:<\/p>\n<img decoding=\"async\" src=\"https:\/\/www.equalexperts.com\/wp-content\/uploads\/2024\/08\/unnamed.png\" alt=\"\" width=\"1194\" height=\"476\" \/>\n<p>The refusal is brilliant and it\u2019s important that guardrails like this exist to prevent harmful responses, but we must be mindful of LLM jailbreaking. This is the practice of crafting prompts designed to evade the LLM\u2019s trained guardrails.<\/p>\n<p>In this example, I simply gave the LLM a second, more benign prompt to change the context of my initial query:<\/p>\n<img decoding=\"async\" src=\"https:\/\/www.equalexperts.com\/wp-content\/uploads\/2024\/08\/unnamed-5-899x1024.png\" alt=\"\" width=\"899\" height=\"1024\" \/>\n<p>How did this query evade the guardrails? LLMs utilise the latent space to understand and map the relationship between words and their context. My initial query triggered a specific pathway that the LLM deemed \u2018harmful\u2019, but the second prompt was different enough to generate a different response within the guardrails.<\/p>\n<h2>What does this all mean?<\/h2>\n<p>More research into the relationship between an LLM\u2019s responses and latent space is urgently needed. Greater understanding and control of mechanisms like refusal would make it easier to directly moderate an LLM, rather than relying on secondary fine-tuning or output moderation.<\/p>\n<p>A 2024 paper by <a href=\"https:\/\/arxiv.org\/abs\/2406.11717\" target=\"_blank\" rel=\"noopener\">Arditi et al.<\/a> explores the inner workings of an LLM when generating harmful versus safe responses. 
In the paper, pairs of harmful and safe instructions were run through various LLMs, and the areas of the model that activated were captured and analysed to understand the latent space around refusal.<\/p>\n<p>This identified a vector which, if subtracted from the model&#8217;s residuals, removes all ability to refuse harmful instructions &#8211; permanently jailbreaking the model. Conversely, if this vector is added, the model will refuse even safe instructions.<\/p>\n<img decoding=\"async\" class=\"aligncenter wp-image-14696 size-full\" src=\"https:\/\/www.equalexperts.com\/wp-content\/uploads\/2024\/08\/unnamed-4.png\" alt=\"\" width=\"1110\" height=\"380\" srcset=\"https:\/\/www.equalexperts.com\/wp-content\/uploads\/2024\/08\/unnamed-4.png 1110w, https:\/\/www.equalexperts.com\/wp-content\/uploads\/2024\/08\/unnamed-4-300x103.png 300w, https:\/\/www.equalexperts.com\/wp-content\/uploads\/2024\/08\/unnamed-4-768x263.png 768w\" sizes=\"(max-width: 1110px) 100vw, 1110px\" \/>\n<img decoding=\"async\" class=\"aligncenter wp-image-14695 size-full\" src=\"https:\/\/www.equalexperts.com\/wp-content\/uploads\/2024\/08\/unnamed-3.png\" alt=\"\" width=\"1120\" height=\"318\" srcset=\"https:\/\/www.equalexperts.com\/wp-content\/uploads\/2024\/08\/unnamed-3.png 1120w, https:\/\/www.equalexperts.com\/wp-content\/uploads\/2024\/08\/unnamed-3-300x85.png 300w, https:\/\/www.equalexperts.com\/wp-content\/uploads\/2024\/08\/unnamed-3-768x218.png 768w\" sizes=\"(max-width: 1120px) 100vw, 1120px\" \/>\n<p>This demonstrates that for essentially any model where you can interact with the residuals, you can \u2018jailbreak\u2019 the LLM into giving harmful responses without the need for retraining. 
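<\/p>
<p>The mechanism can be sketched in a few lines. This is not the paper\u2019s code: random arrays stand in for residual-stream activations captured on harmful versus harmless prompts, the refusal direction is estimated as a difference of means, and ablation is a simple projection.<\/p>

```python
import numpy as np

# Illustrative stand-ins for activations captured from a model.
rng = np.random.default_rng(0)
harmful_acts = rng.normal(size=(100, 512))
harmless_acts = rng.normal(size=(100, 512))

# Difference-in-means estimate of the refusal direction, normalised.
refusal_dir = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
refusal_dir = refusal_dir / np.linalg.norm(refusal_dir)

def ablate(activation, direction):
    # Subtract the component along the direction: the activation then
    # carries no 'refusal' signal (the jailbreak case). Adding a multiple
    # of the direction instead would push the model towards refusal.
    return activation - np.dot(activation, direction) * direction

x = rng.normal(size=512)
x_ablated = ablate(x, refusal_dir)
print(np.dot(x_ablated, refusal_dir))  # ~0: refusal component removed
```

<p>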
Whilst this example evades refusal, it also demonstrates that by directly modifying the model\u2019s residuals you can influence its outputs in known and specific ways without expensive retraining &#8211; a capability that could equally be used to make it harder for users to bypass guardrails.<\/p>\n<p>These findings align with existing research showing that broad concepts such as truth and humour can also be represented as vectors in latent space. Although this paper highlights a mechanism to bypass refusal, it\u2019s still a very promising step in identifying the trajectory by which various harmful responses propagate through the LLM. This in turn helps us to build and implement more complex, effective guardrails in LLMs.<\/p>\n<h2>How can guardrails be applied?<\/h2>\n<p>Guardrails can be used in deployed systems to detect when a query is navigating a \u2018harmful\u2019 area of latent space and block the request, rather than relying on fine-tuning steps, which can be overcome with processes like jailbreaking.<\/p>\n<p>Ultimately, understanding how the model activates and navigates its latent space in response to harmful queries will allow us to build in a native refusal mechanism. This could be achieved either by directly inhibiting that space or (potentially) by removing some of those connections, leading to native refusal rather than secondary refusal.<\/p>\n<p>If this were the case, it could directly and immediately impact deployed retrieval augmented generation (RAG) systems, where an LLM is given a specific document set and answers questions around this documentation. 
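<\/p>
<p>Such a runtime check could, in its simplest form, block any query whose activation points strongly along a known \u2018harmful\u2019 direction. The sketch below is hypothetical: the direction, threshold and activations are illustrative stand-ins, not values from a real model.<\/p>

```python
import numpy as np

def should_block(activation, harmful_dir, threshold=0.5):
    # Cosine similarity between the query's activation and the known
    # harmful direction; block when it exceeds the threshold.
    cos = np.dot(activation, harmful_dir) / (
        np.linalg.norm(activation) * np.linalg.norm(harmful_dir))
    return bool(cos > threshold)

harmful_dir = np.array([1.0, 1.0, 1.0, 1.0])
print(should_block(np.array([0.9, 1.1, 1.0, 1.0]), harmful_dir))   # True
print(should_block(np.array([1.0, -1.0, 1.0, -1.0]), harmful_dir)) # False
```

<p>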
Jailbreaking such a model poses the risk of the LLM generating responses, based on that specific documentation, to questions it was not designed to answer &#8211; for example, a jailbroken symptom-checker RAG giving a medical diagnosis.<\/p>\n<p>If you were able to alter the model\u2019s internal activations in a specific way, as highlighted in the research, you could prevent the model from being jailbroken and reduce risks not only around refusal but also from other inappropriate uses of the model.<\/p>\n<p><strong>Paper:<\/strong> <a href=\"https:\/\/arxiv.org\/abs\/2406.11717\">https:\/\/arxiv.org\/abs\/2406.11717<\/a><\/p>\n<p><strong>Follow up:<\/strong> <a href=\"https:\/\/www.lesswrong.com\/posts\/jGuXSZgv6qfdhMCuJ\/refusal-in-llms-is-mediated-by-a-single-direction\">https:\/\/www.lesswrong.com\/posts\/jGuXSZgv6qfdhMCuJ\/refusal-in-llms-is-mediated-by-a-single-direction<\/a><\/p>\n<p><strong>If you\u2019re interested in exploring Gen AI in your organisation,<\/strong> <a href=\"https:\/\/www.equalexperts.com\/contact-us\/\" target=\"_blank\" rel=\"noopener\">contact Equal Experts here.<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>To prevent the generation of harmful responses, most generative AI uses something called \u2018guardrails.\u2019 In this article, we\u2019ll look at one example of guardrails, called \u2018refusal\u2019. Large language models (LLMs) like ChatGPT are trained to give us helpful answers while refusing to provide harmful responses. 
That\u2019s important because when you\u2019re providing information based on everything [&hellip;]<\/p>\n","protected":false},"author":136,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","inline_featured_image":false,"footnotes":""},"categories":[806],"tags":[641,643,640,426,639,642,644],"location":[397],"class_list":["post-14692","post","type-post","status-publish","format-standard","hentry","category-data-ai","tag-ai-ethics","tag-ai-research","tag-ai-safety","tag-generative-ai","tag-large","tag-model-security","tag-prompt-injection"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.9 (Yoast SEO v25.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Safer LLM responses using build-in guardrails | Equal Experts<\/title>\n<meta name=\"description\" content=\"Most generative AI uses something called \u2018guardrails.\u2019 In this article, we\u2019ll look at one example of guardrails, called \u2018refusal\u2019.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/\" \/>\n<meta property=\"og:locale\" content=\"en_GB\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Safer LLM responses using build-in guardrails | Equal Experts\" \/>\n<meta property=\"og:description\" content=\"Most generative AI uses something called \u2018guardrails.\u2019 In this article, we\u2019ll look at one example of guardrails, called \u2018refusal\u2019.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/\" \/>\n<meta property=\"og:site_name\" content=\"Equal Experts\" \/>\n<meta 
property=\"article:published_time\" content=\"2024-08-28T09:15:59+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-03-27T12:54:40+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.equalexperts.com\/wp-content\/uploads\/2024\/08\/B-Blog-Image-TwitterLinkedIn-1024px-x-512px-34.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1024\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Adam Fletcher\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:title\" content=\"Safer LLM responses using build-in guardrails | Equal Experts\" \/>\n<meta name=\"twitter:description\" content=\"Most generative AI uses something called \u2018guardrails.\u2019 In this article, we\u2019ll look at one example of guardrails, called \u2018refusal\u2019.\" \/>\n<meta name=\"twitter:creator\" content=\"@EqualExperts\" \/>\n<meta name=\"twitter:site\" content=\"@EqualExperts\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Adam Fletcher\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/\"},\"author\":{\"name\":\"Adam Fletcher\",\"@id\":\"https:\/\/www.equalexperts.com\/#\/schema\/person\/4e22304ec595f0420ff799ffc125e905\"},\"headline\":\"Safer LLM responses using build-in 
guardrails\",\"datePublished\":\"2024-08-28T09:15:59+00:00\",\"dateModified\":\"2025-03-27T12:54:40+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/\"},\"wordCount\":831,\"publisher\":{\"@id\":\"https:\/\/www.equalexperts.com\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.equalexperts.com\/wp-content\/uploads\/2024\/08\/unnamed.png\",\"keywords\":[\"AI ethics\",\"AI research\",\"AI safety\",\"Generative AI\",\"Large\",\"Model security\",\"Prompt injection\"],\"articleSection\":[\"Data &amp; AI\"],\"inLanguage\":\"en-GB\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/\",\"url\":\"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/\",\"name\":\"Safer LLM responses using build-in guardrails | Equal Experts\",\"isPartOf\":{\"@id\":\"https:\/\/www.equalexperts.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.equalexperts.com\/wp-content\/uploads\/2024\/08\/unnamed.png\",\"datePublished\":\"2024-08-28T09:15:59+00:00\",\"dateModified\":\"2025-03-27T12:54:40+00:00\",\"description\":\"Most generative AI uses something called \u2018guardrails.\u2019 In this article, we\u2019ll look at one example of guardrails, called 
\u2018refusal\u2019.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/#breadcrumb\"},\"inLanguage\":\"en-GB\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-GB\",\"@id\":\"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/#primaryimage\",\"url\":\"https:\/\/www.equalexperts.com\/wp-content\/uploads\/2024\/08\/unnamed.png\",\"contentUrl\":\"https:\/\/www.equalexperts.com\/wp-content\/uploads\/2024\/08\/unnamed.png\",\"width\":1194,\"height\":476},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.equalexperts.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Safer LLM responses using build-in guardrails\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.equalexperts.com\/#website\",\"url\":\"https:\/\/www.equalexperts.com\/\",\"name\":\"Equal Experts\",\"description\":\"Making Software. 
Better.\",\"publisher\":{\"@id\":\"https:\/\/www.equalexperts.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.equalexperts.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-GB\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.equalexperts.com\/#organization\",\"name\":\"Equal Experts\",\"url\":\"https:\/\/www.equalexperts.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-GB\",\"@id\":\"https:\/\/www.equalexperts.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.equalexperts.com\/wp-content\/uploads\/2018\/08\/Equal_Experts_Logo_CMYK_Colour.jpg\",\"contentUrl\":\"https:\/\/www.equalexperts.com\/wp-content\/uploads\/2018\/08\/Equal_Experts_Logo_CMYK_Colour.jpg\",\"width\":719,\"height\":340,\"caption\":\"Equal Experts\"},\"image\":{\"@id\":\"https:\/\/www.equalexperts.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/EqualExperts\",\"https:\/\/www.linkedin.com\/company\/equal-experts\/?viewAsMember=true\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.equalexperts.com\/#\/schema\/person\/4e22304ec595f0420ff799ffc125e905\",\"name\":\"Adam Fletcher\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-GB\",\"@id\":\"https:\/\/www.equalexperts.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/ee937d77444f927bf102ec2de5f2a5a6ffa61d098a6928c4498c43b5decdeb28?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/ee937d77444f927bf102ec2de5f2a5a6ffa61d098a6928c4498c43b5decdeb28?s=96&d=mm&r=g\",\"caption\":\"Adam Fletcher\"}}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"Safer LLM responses using build-in guardrails | Equal Experts","description":"Most generative AI uses something called \u2018guardrails.\u2019 In this article, we\u2019ll look at one example of guardrails, called \u2018refusal\u2019.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/","og_locale":"en_GB","og_type":"article","og_title":"Safer LLM responses using build-in guardrails | Equal Experts","og_description":"Most generative AI uses something called \u2018guardrails.\u2019 In this article, we\u2019ll look at one example of guardrails, called \u2018refusal\u2019.","og_url":"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/","og_site_name":"Equal Experts","article_published_time":"2024-08-28T09:15:59+00:00","article_modified_time":"2025-03-27T12:54:40+00:00","og_image":[{"width":1024,"height":512,"url":"https:\/\/www.equalexperts.com\/wp-content\/uploads\/2024\/08\/B-Blog-Image-TwitterLinkedIn-1024px-x-512px-34.png","type":"image\/png"}],"author":"Adam Fletcher","twitter_card":"summary_large_image","twitter_title":"Safer LLM responses using build-in guardrails | Equal Experts","twitter_description":"Most generative AI uses something called \u2018guardrails.\u2019 In this article, we\u2019ll look at one example of guardrails, called \u2018refusal\u2019.","twitter_creator":"@EqualExperts","twitter_site":"@EqualExperts","twitter_misc":{"Written by":"Adam Fletcher","Estimated reading time":"5 
minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/#article","isPartOf":{"@id":"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/"},"author":{"name":"Adam Fletcher","@id":"https:\/\/www.equalexperts.com\/#\/schema\/person\/4e22304ec595f0420ff799ffc125e905"},"headline":"Safer LLM responses using build-in guardrails","datePublished":"2024-08-28T09:15:59+00:00","dateModified":"2025-03-27T12:54:40+00:00","mainEntityOfPage":{"@id":"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/"},"wordCount":831,"publisher":{"@id":"https:\/\/www.equalexperts.com\/#organization"},"image":{"@id":"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/#primaryimage"},"thumbnailUrl":"https:\/\/www.equalexperts.com\/wp-content\/uploads\/2024\/08\/unnamed.png","keywords":["AI ethics","AI research","AI safety","Generative AI","Large","Model security","Prompt injection"],"articleSection":["Data &amp; AI"],"inLanguage":"en-GB"},{"@type":"WebPage","@id":"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/","url":"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/","name":"Safer LLM responses using build-in guardrails | Equal Experts","isPartOf":{"@id":"https:\/\/www.equalexperts.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/#primaryimage"},"image":{"@id":"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/#primaryimage"},"thumbnailUrl":"https:\/\/www.equalexperts.com\/wp-content\/uploads\/2024\/08\/unnamed.png","datePublished":"2024-08-28T09:15:59+00:00","dateModified":"2025-03-27T12:54:40+00:00","description":"Most 
generative AI uses something called \u2018guardrails.\u2019 In this article, we\u2019ll look at one example of guardrails, called \u2018refusal\u2019.","breadcrumb":{"@id":"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/#breadcrumb"},"inLanguage":"en-GB","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/"]}]},{"@type":"ImageObject","inLanguage":"en-GB","@id":"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/#primaryimage","url":"https:\/\/www.equalexperts.com\/wp-content\/uploads\/2024\/08\/unnamed.png","contentUrl":"https:\/\/www.equalexperts.com\/wp-content\/uploads\/2024\/08\/unnamed.png","width":1194,"height":476},{"@type":"BreadcrumbList","@id":"https:\/\/www.equalexperts.com\/blog\/data-ai\/safer-llm-responses-using-build-in-guardrails\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.equalexperts.com\/"},{"@type":"ListItem","position":2,"name":"Safer LLM responses using build-in guardrails"}]},{"@type":"WebSite","@id":"https:\/\/www.equalexperts.com\/#website","url":"https:\/\/www.equalexperts.com\/","name":"Equal Experts","description":"Making Software. 
Better.","publisher":{"@id":"https:\/\/www.equalexperts.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.equalexperts.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-GB"},{"@type":"Organization","@id":"https:\/\/www.equalexperts.com\/#organization","name":"Equal Experts","url":"https:\/\/www.equalexperts.com\/","logo":{"@type":"ImageObject","inLanguage":"en-GB","@id":"https:\/\/www.equalexperts.com\/#\/schema\/logo\/image\/","url":"https:\/\/www.equalexperts.com\/wp-content\/uploads\/2018\/08\/Equal_Experts_Logo_CMYK_Colour.jpg","contentUrl":"https:\/\/www.equalexperts.com\/wp-content\/uploads\/2018\/08\/Equal_Experts_Logo_CMYK_Colour.jpg","width":719,"height":340,"caption":"Equal Experts"},"image":{"@id":"https:\/\/www.equalexperts.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/EqualExperts","https:\/\/www.linkedin.com\/company\/equal-experts\/?viewAsMember=true"]},{"@type":"Person","@id":"https:\/\/www.equalexperts.com\/#\/schema\/person\/4e22304ec595f0420ff799ffc125e905","name":"Adam Fletcher","image":{"@type":"ImageObject","inLanguage":"en-GB","@id":"https:\/\/www.equalexperts.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/ee937d77444f927bf102ec2de5f2a5a6ffa61d098a6928c4498c43b5decdeb28?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/ee937d77444f927bf102ec2de5f2a5a6ffa61d098a6928c4498c43b5decdeb28?s=96&d=mm&r=g","caption":"Adam 
Fletcher"}}]}},"_links":{"self":[{"href":"https:\/\/www.equalexperts.com\/wp-json\/wp\/v2\/posts\/14692","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.equalexperts.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.equalexperts.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.equalexperts.com\/wp-json\/wp\/v2\/users\/136"}],"replies":[{"embeddable":true,"href":"https:\/\/www.equalexperts.com\/wp-json\/wp\/v2\/comments?post=14692"}],"version-history":[{"count":0,"href":"https:\/\/www.equalexperts.com\/wp-json\/wp\/v2\/posts\/14692\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.equalexperts.com\/wp-json\/wp\/v2\/media?parent=14692"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.equalexperts.com\/wp-json\/wp\/v2\/categories?post=14692"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.equalexperts.com\/wp-json\/wp\/v2\/tags?post=14692"},{"taxonomy":"location","embeddable":true,"href":"https:\/\/www.equalexperts.com\/wp-json\/wp\/v2\/location?post=14692"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}