
fix: use OpenAIGenericClient for non-OpenAI endpoints (Gemini, Ollama, vLLM) #3321

Workflow file for this run

name: AI Moderator
on:
  issues:
    types: [opened]
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
jobs:
  spam-detection:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write
      models: read
      contents: read
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - uses: github/ai-moderator@81159c370785e295c97461ade67d7c33576e9319 # v1
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          spam-label: 'spam'
          ai-label: 'ai-generated'
          minimize-detected-comments: true
          # Built-in prompt configuration (all enabled by default)
          enable-spam-detection: true
          enable-link-spam-detection: true
          enable-ai-detection: true
          # custom-prompt-path: '.github/prompts/my-custom.prompt.yml' # Optional
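
The commented-out `custom-prompt-path` input points at a prompt file the action can load in place of (or alongside) its built-in detection prompts. As a sketch only, here is what such a file might look like if it follows the `.prompt.yml` format used by GitHub Models; the file name, model choice, and prompt wording below are illustrative assumptions, not part of this repository:

```yaml
# .github/prompts/my-custom.prompt.yml (hypothetical example)
# Assumes the GitHub Models .prompt.yml schema; adjust to the
# schema the ai-moderator action actually expects.
name: Custom spam check
description: Flag comments that are promotional or off-topic
model: openai/gpt-4o-mini
messages:
  - role: system
    content: >-
      You are a moderation assistant. Reply with "spam" or "ok"
      and nothing else.
  - role: user
    content: "{{comment_body}}"   # placeholder name is an assumption
```

With a file like this committed, the workflow above would uncomment the `custom-prompt-path` line to enable it; the built-in `enable-*` toggles remain independent inputs.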