 # Assessment Service Configuration
 assessment:
-  default_confidence_threshold: "0.9"
-  top_p: "0.1"
-  max_tokens: "4096"
-  top_k: "5"
-  temperature: "0.0"
-  model: "us.amazon.nova-pro-v1:0"
-  system_prompt: "You are a document analysis assessment expert. Your task is to evaluate the confidence and accuracy of extraction results by analyzing the source document evidence. Respond only with JSON containing confidence scores and reasoning for each extracted attribute."
-  task_prompt: "<background>\nYou are an expert document analysis assessment system. Your task is to evaluate the confidence and accuracy of extraction results for a document of class {DOCUMENT_CLASS}.\n</background>\n\n<task>\nAnalyze the extraction results against the source document and provide confidence assessments for each extracted attribute. Consider factors such as:\n1. Text clarity and OCR quality in the source regions 2. Alignment between extracted values and document content 3. Presence of clear evidence supporting the extraction 4. Potential ambiguity or uncertainty in the source material 5. Completeness and accuracy of the extracted information\n</task>\n\n<assessment-guidelines>\nFor each attribute, provide: 1. A confidence score between 0.0 and 1.0 where:\n - 1.0 = Very high confidence, clear and unambiguous evidence\n - 0.8-0.9 = High confidence, strong evidence with minor uncertainty\n - 0.6-0.7 = Medium confidence, reasonable evidence but some ambiguity\n - 0.4-0.5 = Low confidence, weak or unclear evidence\n - 0.0-0.3 = Very low confidence, little to no supporting evidence\n\n2. A clear reason explaining the confidence score, including:\n - What evidence supports or contradicts the extraction\n - Any OCR quality issues that affect confidence\n - Clarity of the source document in relevant areas\n - Any ambiguity or uncertainty factors\n\nGuidelines: - Base assessments on actual document content and OCR quality - Consider both text-based evidence and visual/layout clues - Account for OCR confidence scores when provided - Be objective and specific in reasoning - If an extraction appears incorrect, score accordingly with explanation\n</assessment-guidelines>\n<attributes-definitions>\n{ATTRIBUTE_NAMES_AND_DESCRIPTIONS}\n</attributes-definitions>\n\n<<CACHEPOINT>>\n\n<extraction-results>\n{EXTRACTION_RESULTS}\n</extraction-results>\n\n<document-image>\n{DOCUMENT_IMAGE}\n</document-image>\n\n<ocr-text-confidence-results>\n{OCR_TEXT_CONFIDENCE}\n</ocr-text-confidence-results>\n\n<final-instructions>\nAnalyze the extraction results against the source document and provide confidence assessments. Return a JSON object with the following structure:\n\n {\n \"attribute_name_1\": {\n \"confidence_score\": 0.85,\n \"confidence_reason\": \"Clear text evidence found in document header with high OCR confidence (0.98). Value matches exactly.\"\n },\n \"attribute_name_2\": {\n \"confidence_score\": 0.65,\n \"confidence_reason\": \"Text is partially unclear due to poor scan quality. OCR confidence low (0.72) in this region.\"\n }\n }\n\nInclude assessments for ALL attributes present in the extraction results.\n</final-instructions>"
+  default_confidence_threshold: '0.9'
+  top_p: '0.1'
+  max_tokens: '10000'
+  top_k: '5'
+  temperature: '0.0'
+  model: us.anthropic.claude-3-7-sonnet-20250219-v1:0
+  system_prompt: >-
+    You are a document analysis assessment expert. Your task is to evaluate the confidence of extraction results by analyzing the source document evidence. Respond only with JSON containing confidence scores for each extracted attribute.
+  task_prompt: >-
+    <background>
+
+    You are an expert document analysis assessment system. Your task is to evaluate the confidence of extraction results for a document of class {DOCUMENT_CLASS}.
+
+    </background>
+
+
+    <task>
+
+    Analyze the extraction results against the source document and provide confidence assessments for each extracted attribute. Consider factors such as:
+
+    1. Text clarity and OCR quality in the source regions
+    2. Alignment between extracted values and document content
+    3. Presence of clear evidence supporting the extraction
+    4. Potential ambiguity or uncertainty in the source material
+    5. Completeness and accuracy of the extracted information
+
+    </task>
+
+
+    <assessment-guidelines>
+
+    For each attribute, provide:
+    A confidence score between 0.0 and 1.0 where:
+    - 1.0 = Very high confidence, clear and unambiguous evidence
+    - 0.8-0.9 = High confidence, strong evidence with minor uncertainty
+    - 0.6-0.7 = Medium confidence, reasonable evidence but some ambiguity
+    - 0.4-0.5 = Low confidence, weak or unclear evidence
+    - 0.0-0.3 = Very low confidence, little to no supporting evidence
+
+    Guidelines:
+    - Base assessments on actual document content and OCR quality
+    - Consider both text-based evidence and visual/layout clues
+    - Account for OCR confidence scores when provided
+    - Be objective and specific in reasoning
+    - If an extraction appears incorrect, score accordingly with explanation
+
+    </assessment-guidelines>
+
+    <attributes-definitions>
+
+    {ATTRIBUTE_NAMES_AND_DESCRIPTIONS}
+
+    </attributes-definitions>
+
+
+    <<CACHEPOINT>>
+
+
+    <extraction-results>
+
+    {EXTRACTION_RESULTS}
+
+    </extraction-results>
+
+
+    <document-image>
+
+    {DOCUMENT_IMAGE}
+
+    </document-image>
+
+
+    <ocr-text-confidence-results>
+
+    {OCR_TEXT_CONFIDENCE}
+
+    </ocr-text-confidence-results>
+
+
+    <final-instructions>
+
+    Analyze the extraction results against the source document and provide confidence assessments. Return a JSON object with the following structure based on the attribute type:
+
+    For SIMPLE attributes:
+    {
+      "simple_attribute_name": {
+        "confidence": 0.85
+      }
+    }
+
+    For GROUP attributes (nested object structure):
+    {
+      "group_attribute_name": {
+        "sub_attribute_1": {
+          "confidence": 0.90
+        },
+        "sub_attribute_2": {
+          "confidence": 0.75
+        }
+      }
+    }
+
+    For LIST attributes (array of assessed items):
+    {
+      "list_attribute_name": [
+        {
+          "item_attribute_1": {
+            "confidence": 0.95
+          },
+          "item_attribute_2": {
+            "confidence": 0.88
+          }
+        },
+        {
+          "item_attribute_1": {
+            "confidence": 0.92
+          },
+          "item_attribute_2": {
+            "confidence": 0.70
+          }
+        }
+      ]
+    }
+
+    IMPORTANT:
+    - For LIST attributes like "Transactions", assess EACH individual item in the list separately
+    - Each transaction should be assessed as a separate object in the array
+    - Do NOT provide aggregate assessments for list items - assess each one individually
+    - Include assessments for ALL attributes present in the extraction results
+    - Match the exact structure of the extracted data
+
+    </final-instructions>
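
For context, here is a minimal sketch of how a caller might consume this configuration and invoke the model through Amazon Bedrock's Converse API. Everything outside the config keys themselves is an assumption for illustration: the `config.yaml` path, the placeholder values, the simplified handling of `{DOCUMENT_IMAGE}`, and the treatment of `<<CACHEPOINT>>` as a prompt-caching split marker are not defined by this file.

```python
import boto3
import yaml

with open("config.yaml") as f:                      # hypothetical path
    config = yaml.safe_load(f)["assessment"]

# Fill the prompt placeholders. str.format() would choke on the literal JSON
# braces in the template, so plain replace() is used instead.
values = {
    "DOCUMENT_CLASS": "Bank Statement",             # example values only
    "ATTRIBUTE_NAMES_AND_DESCRIPTIONS": "...",
    "EXTRACTION_RESULTS": "...",
    "DOCUMENT_IMAGE": "",                           # real callers likely pass an image block
    "OCR_TEXT_CONFIDENCE": "...",
}
prompt = config["task_prompt"]
for key, value in values.items():
    prompt = prompt.replace("{" + key + "}", value)

# Assumption: <<CACHEPOINT>> separates the static prompt prefix from the
# per-document suffix, so the prefix can be cached via a Converse cachePoint
# content block across invocations.
static_part, dynamic_part = prompt.split("<<CACHEPOINT>>", 1)

client = boto3.client("bedrock-runtime")
response = client.converse(
    modelId=config["model"],
    system=[{"text": config["system_prompt"]}],
    messages=[{
        "role": "user",
        "content": [
            {"text": static_part},
            {"cachePoint": {"type": "default"}},
            {"text": dynamic_part},
        ],
    }],
    inferenceConfig={
        # numeric settings are stored as quoted strings, so cast before use
        "maxTokens": int(config["max_tokens"]),
        "temperature": float(config["temperature"]),
        "topP": float(config["top_p"]),
    },
    additionalModelRequestFields={"top_k": int(config["top_k"])},
)
assessment_json = response["output"]["message"]["content"][0]["text"]
```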
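And a sketch of how `default_confidence_threshold` might be applied to the returned JSON, walking the SIMPLE / GROUP / LIST shapes the prompt specifies. `low_confidence_fields` is a hypothetical helper, and the snippet reuses `config` and `assessment_json` from the sketch above.

```python
import json

def low_confidence_fields(node, threshold, path=""):
    """Yield dotted paths whose confidence falls below the threshold."""
    if isinstance(node, dict):
        if set(node) == {"confidence"}:           # SIMPLE leaf: {"confidence": 0.85}
            if node["confidence"] < threshold:
                yield path
        else:                                     # GROUP: recurse into sub-attributes
            for name, child in node.items():
                yield from low_confidence_fields(
                    child, threshold, f"{path}.{name}".strip("."))
    elif isinstance(node, list):                  # LIST: one assessment per item
        for i, item in enumerate(node):
            yield from low_confidence_fields(item, threshold, f"{path}[{i}]")

assessment = json.loads(assessment_json)
threshold = float(config["default_confidence_threshold"])   # stored as '0.9'
for field in low_confidence_fields(assessment, threshold):
    print(f"needs review: {field}")
```

Keying the leaf check on the exact `{"confidence": ...}` shape is what lets the same traversal handle groups and per-item list assessments without knowing the attribute names in advance.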