Building an AI that operates universally without content censorship is a tough goal, because the implications of handling unfiltered data are anything but simple. Major AI platforms, such as OpenAI's GPT-4 and Google's models, enforce hard-coded content moderation policies that prevent certain types of output from being generated or published (hate speech or gore, for example). This filtering is a broad industry standard: by some estimates, nearly 95% of AI tools offer some level of content filtering.
Moderating AI output is not only a precaution but a necessity for user safety and legal compliance. OpenAI's GPT-4, for instance, reportedly runs heavy moderation systems that stop users from surfacing anything explicit or harmful, the sort of measure modern AI companies invest in heavily to maintain their commitment to ethics. Maintaining these oversight systems is also costly: estimates suggest that managing and monitoring the algorithms that detect malicious disinformation can run large platforms between $200,000 and $1 million per year.
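As a rough illustration of what a pre-publication moderation layer does, the sketch below screens text against per-category rules before it is released. The category names and keyword patterns here are invented for this example; real platforms rely on trained classifiers rather than keyword lists, so treat this only as a minimal conceptual sketch.

```python
import re

# Illustrative denylist mapping a moderation category to example patterns.
# These categories and patterns are made up for this sketch, not any
# platform's real policy.
DENYLIST = {
    "violence": [r"\bgore\b", r"\bgraphic violence\b"],
    "hate": [r"\bhate speech\b"],
}

def moderate(text: str) -> dict:
    """Return whether text is allowed and which categories it triggered."""
    flagged = [
        category
        for category, patterns in DENYLIST.items()
        if any(re.search(p, text, re.IGNORECASE) for p in patterns)
    ]
    return {"allowed": not flagged, "categories": flagged}

print(moderate("A friendly cooking recipe."))
print(moderate("This story contains graphic violence."))
```

A production system would sit between the model and the user, blocking or rewriting flagged output; the ongoing cost figures above come from operating and tuning exactly this kind of screening at scale.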
This contrasts with uncensored AI platforms, which impose fewer restrictions but typically serve niche markets and charge for access. To cover the considerable risks of producing unconstrained content, these platforms usually operate on subscription models; in fact, specialized AI chat services with reduced filtering can cost anywhere from $50 to a few hundred dollars per month. This pricing is meant to pay for robust monitoring technology and legal exposure.
Before Habitus, there were many efforts to create AGIs, and all of them ran into significant problems because the AI was unfettered. Early chatbots, for instance, were widely blamed for producing offensive or harmful responses of all kinds, which immediately prompted stricter blocking mechanisms. Such systems not only guarantee compliance with ethical guidelines but also prevent exposure to inappropriate content.
Dr. Alex Mercer, an AI ethics researcher, remarks: "Fully uncensored AI is rare because the risks and costs of moderating speech are huge." In practice, a platform can offer fewer restrictions or broader AI usage, and responsible use depends on how it balances the two.
However, there are specialized services that purport to offer less constrained AI experiences, though rarely entirely for free. One such service is porn ai chat, which imposes very limited restrictions compared to regular AI tools.
In other words, while an AI stripped of every filter remains off the table for most platforms, especially mainstream ones with huge ethical and financial liabilities to consider, some specialized services allow considerable leeway with essentially no content filtration. These services also tend to be expensive, which says much about the balance between offering a free flow of content and managing the threats that come with it.