Welcome to Software Development on Codidact!

Post History

33%
+2 −6
Q&A How do I use an existing AI model to classify pornographic images? [closed]

In the last few months, AI has advanced considerably, notably in the area of generating images. We now have powerful models like DALL-E, Stable Diffusion, etc. These are quite competent at generati...

0 answers  ·  posted 1y ago by matthewsnyder‭  ·  closed 1y ago by Alexei‭

#6: Question closed by Alexei‭ · 2023-07-27T08:14:10Z (over 1 year ago)
#5: Post edited by tripleee‭ · 2023-07-18T15:40:31Z (over 1 year ago)
Typo in title
  Title before:
  • How do I use an existing AI model classify pornographic images?
  Title after:
  • How do I use an existing AI model to classify pornographic images?
  Body before:
  • In the last few months, AI has advanced considerably, notably in the area of generating images: We now have powerful models like DALL-E, Stable Diffusion, etc. These are quite competent at generating an image based on a text prompt.
  • Can any existing models be used as a simple binary classifier of porn/not porn for images? How do I do this?
  • I'm asking for a high level description of how to set up such a classifier. I can read the relevant model/API docs myself, but I would like some pointers and a "big picture" explanation.
  • * An okay solution would have 65% precision and recall rate.
  • * A good solution should have 95% precision and recall rate.
  • * A great solution should have 99% precision and recall rate.
  • Note that these are rough guidelines, I'm not going to actually benchmark your solution and split hairs over exact accuracy. I'm just trying to explain what sort of ballpark performance I expect. I want to use this to filter out and remove porn from various content streams, so I need something that has a chance of working reasonably well, not just a proof of concept.
  • The definition of porn is not critical. So long as things that are obviously porn (full nudity) get a positive, and things that are obviously not (cat eating watermelon) get a negative, I'm not too worried about borderline cases (sculptures, suggestive imagery, pareidolia, etc I don't really care which way it's classified).
  • Ideally I would like to avoid developing full image recognition models of my own, since it's a lot of work for an individual. So no building large training sets, training models, tuning them, etc. I am hoping the existing state of the art models are already trained enough to distinguish porn from not porn.
  • I am willing to develop a small model if necessary - a sort of minimal layer boosting off of a "real" image model, that merely classifies the output of that model into my binary classes rather than learning to actually interpret images. I'm thinking here something very simple, like a basic decision tree, with 100s or even 10s of training data (generated manually). The real work of interpreting images should still be done by an off the shelf existing model.
  Body after:
  • In the last few months, AI has advanced considerably, notably in the area of generating images. We now have powerful models like DALL-E, Stable Diffusion, etc. These are quite competent at generating an image based on a text prompt.
  • Can any existing models be used as a simple binary classifier of porn/not porn for images? How do I do this?
  • I'm asking for a high level description of how to set up such a classifier. I can read the relevant model/API docs myself, but I would like some pointers and a "big picture" explanation.
  • * An okay solution would have 65% precision and recall rate.
  • * A good solution should have 95% precision and recall rate.
  • * A great solution should have 99% precision and recall rate.
  • Note that these are rough guidelines, I'm not going to actually benchmark your solution and split hairs over exact accuracy. I'm just trying to explain what sort of ballpark performance I expect. I want to use this to filter out and remove porn from various content streams, so I need something that has a chance of working reasonably well, not just a proof of concept.
  • The definition of porn is not critical. So long as things that are obviously porn (full nudity) get a positive, and things that are obviously not (cat eating watermelon) get a negative, I'm not too worried about borderline cases (sculptures, suggestive imagery, pareidolia, etc I don't really care which way it's classified).
  • Ideally I would like to avoid developing full image recognition models of my own, since it's a lot of work for an individual. So no building large training sets, training models, tuning them, etc. I am hoping the existing state of the art models are already trained enough to distinguish porn from not porn.
  • I am willing to develop a small model if necessary - a sort of minimal layer boosting off of a "real" image model, that merely classifies the output of that model into my binary classes rather than learning to actually interpret images. I'm thinking here something very simple, like a basic decision tree, with 100s or even 10s of training data (generated manually). The real work of interpreting images should still be done by an existing off the shelf model.
#4: Post edited by matthewsnyder‭ · 2023-07-17T20:08:49Z (over 1 year ago)
  Title before:
  • How do I train a simple image classifier to detect pronography using existing AI models?
  Title after:
  • How do I use an existing AI model classify pornographic images?
#3: Post edited by matthewsnyder‭ · 2023-07-17T17:31:39Z (over 1 year ago)
  Body before:
  • In the last few months, AI has advanced considerably, notably in the area of generating images: We now have powerful models like DALL-E, Stable Diffusion, etc. These are quite competent at generating an image based on a text prompt.
  • Can any existing models be used as a simple binary classifier of porn/not porn for images? How do I do this?
  • I'm asking for a high level description of how to set up such a classifier. I can read the relevant model/API docs myself, but I would like some pointers and a "big picture" explanation.
  • * An okay solution would have 65% precision and recall rate.
  • * A good solution should have 95% precision and recall rate.
  • * A great solution should have 99% precision and recall rate.
  • Note that these are rough guidelines, I'm not going to actually benchmark your solution and split hairs over exact accuracy. I'm just trying to explain what sort of ballpark performance I expect. I want to use this to filter out and remove porn from various content streams, so I need something that has a chance of working reasonably well, not just a proof of concept.
  • The definition of porn is not critical. So long as things that are obviously porn (full nudity) get a positive, and things that are obviously not (cat eating watermelon) get a negative, I'm not too worried about borderline cases.
  • Ideally I would like to avoid developing full image recognition models of my own, since it's a lot of work for an individual. So no building large training sets, training models, tuning them, etc. I am hoping the existing state of the art models are already trained enough to distinguish porn from not porn.
  • I am willing to develop a small model if necessary - a sort of minimal layer boosting off of a "real" image model, that merely classifies the output of that model into my binary classes rather than learning to actually interpret images. I'm thinking here something very simple, like a basic decision tree, with 100s or even 10s of training data (generated manually). The real work of interpreting images should still be done by an off the shelf existing model.
  Body after:
  • In the last few months, AI has advanced considerably, notably in the area of generating images: We now have powerful models like DALL-E, Stable Diffusion, etc. These are quite competent at generating an image based on a text prompt.
  • Can any existing models be used as a simple binary classifier of porn/not porn for images? How do I do this?
  • I'm asking for a high level description of how to set up such a classifier. I can read the relevant model/API docs myself, but I would like some pointers and a "big picture" explanation.
  • * An okay solution would have 65% precision and recall rate.
  • * A good solution should have 95% precision and recall rate.
  • * A great solution should have 99% precision and recall rate.
  • Note that these are rough guidelines, I'm not going to actually benchmark your solution and split hairs over exact accuracy. I'm just trying to explain what sort of ballpark performance I expect. I want to use this to filter out and remove porn from various content streams, so I need something that has a chance of working reasonably well, not just a proof of concept.
  • The definition of porn is not critical. So long as things that are obviously porn (full nudity) get a positive, and things that are obviously not (cat eating watermelon) get a negative, I'm not too worried about borderline cases (sculptures, suggestive imagery, pareidolia, etc I don't really care which way it's classified).
  • Ideally I would like to avoid developing full image recognition models of my own, since it's a lot of work for an individual. So no building large training sets, training models, tuning them, etc. I am hoping the existing state of the art models are already trained enough to distinguish porn from not porn.
  • I am willing to develop a small model if necessary - a sort of minimal layer boosting off of a "real" image model, that merely classifies the output of that model into my binary classes rather than learning to actually interpret images. I'm thinking here something very simple, like a basic decision tree, with 100s or even 10s of training data (generated manually). The real work of interpreting images should still be done by an off the shelf existing model.
#2: Post edited by matthewsnyder‭ · 2023-07-17T17:28:43Z (over 1 year ago)
  Body before:
  • In the last few months, AI has advanced considerably, notably in the area of generating images: We now have powerful models like DALL-E, Stable Diffusion, etc. These are quite competent at generating an image based on a text prompt.
  • Can any existing models be used as a simple binary classifier of porn/not porn for images? How do I do this?
  • I'm asking for a high level description of how to set up such a classifier. I can read the relevant model/API docs myself, but I would like some pointers and a "big picture" explanation.
  • * An okay solution would have 65% precision and recall rate.
  • * A good solution should have 95% precision and recall rate.
  • * A great solution should have 99% precision and recall rate.
  • Note that these are rough guidelines, I'm not going to actually benchmark your solution and split hair over exact accuracy. I'm just trying to explain what sort of ballpark performance I expect. I want to use this to filter out and remove porn from various content streams, so I need something that works reasonably well, not just a proof of concept.
  • The definition of porn is not critical. So long as things that are obviously porn (full nudity) get a positive, and things that are obviously not (cat eating watermelon) get a negative, I'm not too worried about borderline cases.
  • Ideally I would like to avoid developing full image recognition models of my own, since it's a lot of work for an individual. So no building large training sets, training models, tuning them, etc. I am hoping the existing state of the art models are already trained enough to distinguish porn from not porn.
  • I am willing to develop a small model if necessary - a sort of minimal layer boosting off of a "real" image model, that merely classifies the output of that model into my binary classes rather than learning to actually interpret images. I'm thinking here something very simple, like a basic decision tree, with 100s or even 10s of training data (generated manually). The real work of interpreting images should still be done by an off the shelf existing model.
  Body after:
  • In the last few months, AI has advanced considerably, notably in the area of generating images: We now have powerful models like DALL-E, Stable Diffusion, etc. These are quite competent at generating an image based on a text prompt.
  • Can any existing models be used as a simple binary classifier of porn/not porn for images? How do I do this?
  • I'm asking for a high level description of how to set up such a classifier. I can read the relevant model/API docs myself, but I would like some pointers and a "big picture" explanation.
  • * An okay solution would have 65% precision and recall rate.
  • * A good solution should have 95% precision and recall rate.
  • * A great solution should have 99% precision and recall rate.
  • Note that these are rough guidelines, I'm not going to actually benchmark your solution and split hairs over exact accuracy. I'm just trying to explain what sort of ballpark performance I expect. I want to use this to filter out and remove porn from various content streams, so I need something that has a chance of working reasonably well, not just a proof of concept.
  • The definition of porn is not critical. So long as things that are obviously porn (full nudity) get a positive, and things that are obviously not (cat eating watermelon) get a negative, I'm not too worried about borderline cases.
  • Ideally I would like to avoid developing full image recognition models of my own, since it's a lot of work for an individual. So no building large training sets, training models, tuning them, etc. I am hoping the existing state of the art models are already trained enough to distinguish porn from not porn.
  • I am willing to develop a small model if necessary - a sort of minimal layer boosting off of a "real" image model, that merely classifies the output of that model into my binary classes rather than learning to actually interpret images. I'm thinking here something very simple, like a basic decision tree, with 100s or even 10s of training data (generated manually). The real work of interpreting images should still be done by an off the shelf existing model.
#1: Initial revision by matthewsnyder‭ · 2023-07-17T17:25:48Z (over 1 year ago)
How do I train a simple image classifier to detect pronography using existing AI models?
In the last few months, AI has advanced considerably, notably in the area of generating images: We now have powerful models like DALL-E, Stable Diffusion, etc. These are quite competent at generating an image based on a text prompt.

Can any existing models be used as a simple binary classifier of porn/not porn for images? How do I do this?

I'm asking for a high level description of how to set up such a classifier. I can read the relevant model/API docs myself, but I would like some pointers and a "big picture" explanation.

* An okay solution would have 65% precision and recall rate.
* A good solution should have 95% precision and recall rate.
* A great solution should have 99% precision and recall rate.

Note that these are rough guidelines, I'm not going to actually benchmark your solution and split hair over exact accuracy. I'm just trying to explain what sort of ballpark performance I expect. I want to use this to filter out and remove porn from various content streams, so I need something that works reasonably well, not just a proof of concept.

The definition of porn is not critical. So long as things that are obviously porn (full nudity) get a positive, and things that are obviously not (cat eating watermelon) get a negative, I'm not too worried about borderline cases.

Ideally I would like to avoid developing full image recognition models of my own, since it's a lot of work for an individual. So no building large training sets, training models, tuning them, etc. I am hoping the existing state of the art models are already trained enough to distinguish porn from not porn.

I am willing to develop a small model if necessary - a sort of minimal layer boosting off of a "real" image model, that merely classifies the output of that model into my binary classes rather than learning to actually interpret images. I'm thinking here something very simple, like a basic decision tree, with 100s or even 10s of training data (generated manually). The real work of interpreting images should still be done by an off the shelf existing model.
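The approach described in the last paragraph — a tiny, hand-trained classifier layered on top of the features of an off-the-shelf vision model — can be sketched roughly as follows. This is only an illustration of the idea, not a working solution: the "embeddings" are simulated with random vectors (in a real setup they would come from a pretrained image model), and a simple nearest-centroid rule stands in for the basic decision tree the question mentions.

```python
import random

random.seed(0)

def fake_embedding(label, dim=8):
    # Stand-in for the feature vector a pretrained vision model would emit
    # for an image; the two classes are given separated cluster centers.
    center = 1.0 if label == "porn" else -1.0
    return [center + random.gauss(0, 0.3) for _ in range(dim)]

# Tiny hand-labeled training set: tens of examples, as the question envisions.
train = [(fake_embedding(lbl), lbl) for lbl in ["porn"] * 20 + ["safe"] * 20]

def centroid(vectors):
    # Coordinate-wise mean of a list of equal-length vectors.
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

porn_c = centroid([v for v, lbl in train if lbl == "porn"])
safe_c = centroid([v for v, lbl in train if lbl == "safe"])

def classify(embedding):
    # Assign the label of the nearer class centroid (squared distance).
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(embedding, c))
    return "porn" if dist(porn_c) < dist(safe_c) else "safe"

print(classify(fake_embedding("porn")))
```

In practice, `fake_embedding` would be replaced by the pooled output of a pretrained image model, and the centroid rule by whatever small classifier (decision tree, logistic regression) performs best on the handful of manually labeled examples; the heavy lifting of interpreting the image stays in the pretrained model, exactly as the question proposes.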