Hey there, fellow Ruby enthusiast! Ready to add some content moderation superpowers to your app? Let's dive into integrating the Perspective API using the nifty perspective
gem. This powerful tool will help you analyze text for potentially toxic or harmful content, making your platform a safer place for users.
Before we jump in, make sure you've got:

- Ruby and RubyGems installed
- A Perspective API key (you can request one through Google Cloud)
- Some user-generated text you'd like to moderate (comments, posts, messages, and so on)
First things first, let's get that gem installed:
```shell
gem install perspective
```
Easy peasy, right?
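If you manage dependencies with Bundler, you can declare the gem in your Gemfile instead of installing it globally:

```ruby
# Gemfile
gem 'perspective'
```

Then run `bundle install` and you're good to go.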
Now, let's set up those API credentials. Create a new file called perspective_config.rb:

```ruby
require 'perspective'

Perspective.configure do |config|
  config.api_key = 'YOUR_API_KEY_HERE'
end
```
Replace 'YOUR_API_KEY_HERE' with your actual API key. Keep it secret, keep it safe!
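Speaking of keeping it safe: hardcoding a key risks leaking it through version control. A safer pattern is to read it from an environment variable. Here's a minimal sketch (the helper name `perspective_api_key` is my own, not part of the gem):

```ruby
# Read the API key from the environment so it never lands in source control.
def perspective_api_key
  ENV.fetch('PERSPECTIVE_API_KEY') do
    raise KeyError, 'Set the PERSPECTIVE_API_KEY environment variable first'
  end
end
```

You'd then write `config.api_key = perspective_api_key` inside the configure block.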
Time to put this bad boy to work. Here's how you can make a simple API request:
```ruby
client = Perspective::Client.new

response = client.analyze_comment(
  comment: "You're awesome!",
  attributes: [:TOXICITY]
)

puts response.summary_scores[:TOXICITY]
```
Run this, and you'll get a toxicity score between 0 and 1 for the comment — the higher the score, the more likely the text is toxic. Neat, huh?
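Under the hood, calls like this hit Google's Comment Analyzer REST endpoint (`POST https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze`). If you're curious, here's a sketch of how the request is shaped — `build_analyze_request` is a hypothetical helper of my own, but the payload fields (`comment.text`, `requestedAttributes`) follow the official API:

```ruby
require 'json'
require 'uri'

# Google's Comment Analyzer REST endpoint (per the official Perspective API docs).
ANALYZE_ENDPOINT = 'https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze'

# Hypothetical helper: builds the URI and JSON body for an analyze request
# without performing any network call.
def build_analyze_request(text, attributes:, api_key:)
  uri = URI("#{ANALYZE_ENDPOINT}?key=#{api_key}")
  body = {
    comment: { text: text },
    requestedAttributes: attributes.to_h { |attr| [attr, {}] }
  }
  [uri, JSON.generate(body)]
end
```

You'd POST that body with `Net::HTTP` and read the score out of the response's `attributeScores` field.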
Want to kick it up a notch? Let's explore some advanced features:
You can request several attributes in a single call:

```ruby
response = client.analyze_comment(
  comment: "You're a genius and I love your work!",
  attributes: [:TOXICITY, :INSULT, :PROFANITY]
)

response.summary_scores.each do |attribute, score|
  puts "#{attribute}: #{score}"
end
```
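Since each summary score is a probability between 0 and 1, it's often handy to keep only the attributes that cross a threshold. A small sketch — `scores_above` is a name of my own choosing, operating on a plain scores hash like the one `summary_scores` returns:

```ruby
# Return only the attributes whose score meets or exceeds the threshold.
def scores_above(scores, threshold: 0.7)
  scores.select { |_attribute, score| score >= threshold }
end
```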
And you can analyze non-English text by specifying the language explicitly:

```ruby
response = client.analyze_comment(
  comment: "Tu es vraiment stupide!", # French: "You're really stupid!"
  attributes: [:TOXICITY],
  languages: ['fr']
)
```
Sometimes things don't go as planned. Here's how to handle common errors:
```ruby
begin
  response = client.analyze_comment(
    comment: "Your comment here",
    attributes: [:TOXICITY]
  )
rescue Perspective::Error => e
  puts "Oops! Something went wrong: #{e.message}"
end
```
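Transient failures like timeouts and rate-limit errors are often worth retrying after a short pause. Here's a generic sketch, independent of the gem — `with_retries` is a name of my own:

```ruby
# Retry a block up to `attempts` times, sleeping a bit longer after each failure.
def with_retries(attempts: 3, base_delay: 0.5)
  tries = 0
  begin
    yield
  rescue StandardError => e
    tries += 1
    raise e if tries >= attempts
    sleep(base_delay * tries) # linear backoff: 0.5s, 1.0s, ...
    retry
  end
end
```

You could wrap any `analyze_comment` call in `with_retries { ... }` to smooth over the occasional hiccup.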
Remember to play nice with the API:

- Respect the quota: the default allowance is low (on the order of one request per second), so space your calls out.
- Cache results so you don't re-analyze the same text over and over.
- Back off and retry when you hit rate-limit errors instead of hammering the endpoint.
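To stay under a low per-second quota, you can space requests out on the client side. A minimal throttle sketch — the `Throttle` class is my own invention, and it takes timestamps as arguments so it can be tested without real sleeping:

```ruby
# Tracks the last call time and reports how long to wait before the next one,
# so successive calls are at least `interval` seconds apart.
class Throttle
  def initialize(interval:)
    @interval = interval
    @last_call = nil
  end

  # Seconds to wait before the next call is allowed, given the current time.
  def wait_time(now)
    return 0.0 if @last_call.nil?
    remaining = @interval - (now - @last_call)
    remaining.positive? ? remaining : 0.0
  end

  # Record that a call happened at `now`.
  def record(now)
    @last_call = now
  end
end
```

In practice you'd call `sleep(throttle.wait_time(Process.clock_gettime(Process::CLOCK_MONOTONIC)))` before each request, then `record` the time afterwards.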
Let's put it all together with a basic comment moderation system:
```ruby
require 'perspective'

Perspective.configure do |config|
  config.api_key = 'YOUR_API_KEY_HERE'
end

def moderate_comment(comment)
  client = Perspective::Client.new
  response = client.analyze_comment(
    comment: comment,
    attributes: [:TOXICITY, :INSULT, :PROFANITY]
  )

  scores = response.summary_scores
  if scores[:TOXICITY] > 0.7 || scores[:INSULT] > 0.7 || scores[:PROFANITY] > 0.7
    puts "Comment rejected: #{comment}"
  else
    puts "Comment accepted: #{comment}"
  end
rescue Perspective::Error => e
  puts "Error analyzing comment: #{e.message}"
end

# Test it out
moderate_comment("You're awesome!")
moderate_comment("You're a complete idiot!")
```
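One nice refinement: if you pull the accept/reject decision out of the API call, you can unit-test it without any network access. A sketch, with `flagged?` being a name of my own, taking a scores hash directly:

```ruby
MODERATION_THRESHOLD = 0.7

# True when any checked attribute's score crosses the threshold.
def flagged?(scores, threshold: MODERATION_THRESHOLD)
  %i[TOXICITY INSULT PROFANITY].any? { |attr| scores.fetch(attr, 0.0) > threshold }
end
```

`moderate_comment` would then just call `flagged?(response.summary_scores)` and act on the answer.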
And there you have it! You're now equipped to integrate Perspective API into your Ruby projects. Remember, with great power comes great responsibility – use this tool wisely to create a positive online environment.
Happy coding, and may your comments always be toxicity-free! 🚀✨