
Day 32: Skills Best Practices

Learning Objectives

  • Master Skill naming conventions
  • Learn to write a high-quality skill.md
  • Understand Skill design principles
  • Master error-handling best practices
  • Learn performance optimization techniques
  • Understand testing and debugging methods

Core Content

Naming Conventions

Skill Name Conventions

Basic rules

  • Use lowercase letters
  • Separate words with hyphens (-)
  • Avoid special characters
  • Keep names concise but descriptive
  • Use verb or noun phrases

Good examples

text-analyzer
file-reader
data-processor
web-scraper
image-generator

Bad examples

TextAnalyzer (camelCase)
text_analyzer (underscores)
text analyzer (contains a space)
text@analyzer (contains a special character)
txt-anlzr (over-abbreviated)
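As a hedged sketch, the structural rules above (lowercase words, single hyphens, no special characters) can be machine-checked with a regular expression; note that over-abbreviation like `txt-anlzr` still passes the pattern and needs human review. The helper name `is_valid_skill_name` is illustrative, not part of any framework:

```python
import re

# Lowercase words of letters/digits, joined by single hyphens.
SKILL_NAME_PATTERN = re.compile(r"[a-z][a-z0-9]*(-[a-z0-9]+)*")

def is_valid_skill_name(name: str) -> bool:
    """Return True if name follows the skill naming conventions above."""
    return SKILL_NAME_PATTERN.fullmatch(name) is not None
```

Such a check is cheap to run in CI or at skill-registration time, catching naming drift early.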

Function Naming Conventions

Basic rules

  • Use lowercase letters and underscores
  • Start with a verb
  • Describe what the function does
  • Be consistent

Good examples

python
from typing import Dict

def analyze_text(text: str) -> Dict:
    pass

def read_file(file_path: str) -> str:
    pass

def process_data(data: Dict) -> Dict:
    pass

Parameter Naming Conventions

Basic rules

  • Use lowercase letters and underscores
  • Describe the parameter's purpose
  • Avoid abbreviations
  • Be consistent

Good examples

python
from typing import Dict

def analyze_text(
    text: str,
    analysis_type: str,
    include_scores: bool = False
) -> Dict:
    pass

Documentation Guidelines

skill.md Structure

Standard structure

markdown
---
name: "skill-name"
version: "1.0.0"
author: "Author Name <email@example.com>"
description: "Brief description of the skill"
tags: ["tag1", "tag2", "tag3"]
license: "MIT"
python_requires: ">=3.8"
dependencies:
  - package1>=1.0.0
  - package2>=2.0.0
---

# Skill Name

## Description

Detailed description of what the skill does.

## Features

- Feature 1
- Feature 2
- Feature 3

## Use Cases

- Use case 1
- Use case 2

## Parameters

| Parameter | Type | Required | Description | Default |
|-----------|------|----------|-------------|---------|
| param1 | string | Yes | Description | - |
| param2 | integer | No | Description | 10 |

## Returns

Description of return value.

## Examples

### Example 1

```python
# Code example
```

## Limitations

List of limitations.

## Changelog

### Version 1.0.0

- Initial release
Description Guidelines

Good descriptions

yaml
description: "Analyze text sentiment and extract keywords using NLP techniques"

Bad descriptions

yaml
description: "Text analysis"
description: "This skill analyzes text"
description: "A skill for text analysis"

Parameter Documentation Guidelines

Good parameter documentation

| Parameter | Type | Required | Description | Default |
|-----------|------|----------|-------------|---------|
| text | string | Yes | The text to analyze. Must be between 1 and 10000 characters. | - |
| analysis_type | string | Yes | Type of analysis: 'sentiment', 'keywords', or 'summary'. | - |
| include_scores | boolean | No | Whether to include confidence scores in the result. | false |

Bad parameter documentation

| Parameter | Type | Required | Description | Default |
|-----------|------|----------|-------------|---------|
| text | string | Yes | Text | - |
| type | string | Yes | Type | - |
| scores | boolean | No | Scores | false |

Design Principles

Single Responsibility Principle

Each Skill should do one thing, and do it well.

Good design

python
class TextAnalyzer:
    def analyze_sentiment(self, text: str) -> Dict:
        pass

class KeywordExtractor:
    def extract_keywords(self, text: str) -> List[str]:
        pass

class TextSummarizer:
    def summarize(self, text: str) -> str:
        pass

Bad design

python
class TextProcessor:
    def process(self, text: str, operation: str) -> Dict:
        if operation == "sentiment":
            return self.analyze_sentiment(text)
        elif operation == "keywords":
            return self.extract_keywords(text)
        elif operation == "summary":
            return self.summarize(text)

Open/Closed Principle

A Skill should be open for extension and closed for modification.

Good design

python
from typing import Callable, Dict

class TextAnalyzer:
    def __init__(self):
        self.analyzers = {}
    
    def register_analyzer(self, name: str, analyzer: Callable):
        self.analyzers[name] = analyzer
    
    def analyze(self, text: str, analyzer_name: str) -> Dict:
        analyzer = self.analyzers.get(analyzer_name)
        if not analyzer:
            raise ValueError(f"Analyzer {analyzer_name} not found")
        return analyzer(text)
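To make the extension point concrete, here is how a caller might plug in a new analyzer without touching the class itself (the registry class is repeated so the snippet runs on its own; "word-count" is a made-up example analyzer):

```python
from typing import Callable, Dict

# The registry class from above, repeated for self-containment.
class TextAnalyzer:
    def __init__(self):
        self.analyzers: Dict[str, Callable] = {}

    def register_analyzer(self, name: str, analyzer: Callable) -> None:
        self.analyzers[name] = analyzer

    def analyze(self, text: str, analyzer_name: str) -> Dict:
        analyzer = self.analyzers.get(analyzer_name)
        if not analyzer:
            raise ValueError(f"Analyzer {analyzer_name} not found")
        return analyzer(text)

# Extension happens by registration, not by editing the class body.
analyzer = TextAnalyzer()
analyzer.register_analyzer("word-count", lambda text: {"words": len(text.split())})
result = analyzer.analyze("hello brave new world", "word-count")
```

New analysis types are added by callers at runtime, which is exactly what "closed for modification" buys you.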

Dependency Inversion Principle

A Skill should depend on abstractions, not on concrete implementations.

Good design

python
from abc import ABC, abstractmethod
from typing import Dict

class TextAnalyzer(ABC):
    @abstractmethod
    def analyze(self, text: str) -> Dict:
        pass

class SentimentAnalyzer(TextAnalyzer):
    def analyze(self, text: str) -> Dict:
        pass

class KeywordExtractor(TextAnalyzer):
    def analyze(self, text: str) -> Dict:
        pass

Error Handling Best Practices

Parameter Validation

Good practice

python
from pydantic import BaseModel, Field, field_validator

class SkillParameters(BaseModel):
    text: str = Field(..., min_length=1, max_length=10000)
    analysis_type: str = Field(..., pattern="^(sentiment|keywords|summary)$")
    
    @field_validator('text')
    @classmethod
    def validate_text(cls, v):
        if not v.strip():
            raise ValueError('Text cannot be empty or whitespace only')
        return v
    
    @field_validator('analysis_type')
    @classmethod
    def validate_analysis_type(cls, v):
        valid_types = ['sentiment', 'keywords', 'summary']
        if v not in valid_types:
            raise ValueError(f'Invalid analysis type. Must be one of: {valid_types}')
        return v

Exception Handling

Good practice

python
from typing import Dict, Any

class TextAnalyzer:
    def execute(self, params: SkillParameters) -> Dict[str, Any]:
        try:
            result = self._analyze(params)
            
            return {
                "success": True,
                "data": result,
                "metadata": {
                    "text_length": len(params.text),
                    "analysis_type": params.analysis_type
                },
                "error": None
            }
        except ValueError as e:
            return {
                "success": False,
                "data": None,
                "metadata": {},
                "error": f"Validation error: {str(e)}"
            }
        except Exception as e:
            return {
                "success": False,
                "data": None,
                "metadata": {},
                "error": f"Internal error: {str(e)}"
            }
    
    def _analyze(self, params: SkillParameters) -> Dict:
        if params.analysis_type == "sentiment":
            return self._analyze_sentiment(params.text)
        elif params.analysis_type == "keywords":
            return self._extract_keywords(params.text)
        elif params.analysis_type == "summary":
            return self._summarize(params.text)
        else:
            raise ValueError(f"Unknown analysis type: {params.analysis_type}")

Error Message Guidelines

Good error messages

python
# Specific and helpful
raise ValueError("Text length must be between 1 and 10000 characters, got 15000")

# Includes context
raise ValueError(f"Invalid analysis type '{analysis_type}'. Must be one of: sentiment, keywords, summary")

# Offers a solution
raise ValueError("API key not found. Please set the API_KEY environment variable or pass it as a parameter.")

Bad error messages

python
# Too vague
raise ValueError("Invalid input")

# Unhelpful
raise ValueError("Error")

# No context
raise ValueError("Failed")
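Beyond message wording, a small exception hierarchy helps callers handle failures uniformly while keeping validation and execution errors distinguishable. This is an illustrative sketch; the class and function names here are hypothetical, not part of any existing framework:

```python
class SkillError(Exception):
    """Base class for all errors raised by a skill."""

class SkillValidationError(SkillError):
    """Raised when input parameters fail validation."""

class SkillExecutionError(SkillError):
    """Raised when the skill fails while running."""

def validate_text_length(text: str, max_length: int = 10000) -> None:
    # The message follows the guidelines above: specific, with context.
    if not 1 <= len(text) <= max_length:
        raise SkillValidationError(
            f"Text length must be between 1 and {max_length} characters, got {len(text)}"
        )
```

Callers can then catch `SkillError` for blanket handling, or the subclasses when validation failures deserve a different response than runtime failures.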

Performance Optimization

Caching Strategy

Good practice

python
import hashlib
from typing import Dict

class TextAnalyzer:
    def __init__(self):
        self._cache = {}
    
    def analyze(self, text: str, use_cache: bool = True) -> Dict:
        cache_key = self._generate_cache_key(text)
        
        if use_cache and cache_key in self._cache:
            return self._cache[cache_key]
        
        result = self._perform_analysis(text)
        
        if use_cache:
            self._cache[cache_key] = result
        
        return result
    
    def _generate_cache_key(self, text: str) -> str:
        return hashlib.md5(text.encode()).hexdigest()
    
    def clear_cache(self):
        self._cache.clear()

Batch Processing Optimization

Good practice

python
from typing import List, Dict

class TextAnalyzer:
    def analyze_batch(self, texts: List[str]) -> List[Dict]:
        results = []
        
        for text in texts:
            try:
                result = self.analyze(text)
                results.append(result)
            except Exception as e:
                results.append({
                    "success": False,
                    "error": str(e)
                })
        
        return results
    
    def analyze_batch_parallel(self, texts: List[str], max_workers: int = 4) -> List[Dict]:
        from concurrent.futures import ThreadPoolExecutor, as_completed
        
        results = [None] * len(texts)
        
        with ThreadPoolExecutor(max_workers=max_workers) as executor:
            future_to_index = {
                executor.submit(self.analyze, text): index
                for index, text in enumerate(texts)
            }
            
            for future in as_completed(future_to_index):
                index = future_to_index[future]
                try:
                    results[index] = future.result()
                except Exception as e:
                    results[index] = {
                        "success": False,
                        "error": str(e)
                    }
        
        return results

Memory Optimization

Good practice

python
from typing import Iterator, Dict

class TextAnalyzer:
    def analyze_stream(self, text_stream: Iterator[str]) -> Iterator[Dict]:
        for text in text_stream:
            try:
                result = self.analyze(text)
                yield result
            except Exception as e:
                yield {
                    "success": False,
                    "error": str(e)
                }
    
    def analyze_large_file(self, file_path: str, chunk_size: int = 10000) -> Iterator[Dict]:
        with open(file_path, 'r') as f:
            buffer = ""
            for line in f:
                buffer += line
                if len(buffer) >= chunk_size:
                    yield self.analyze(buffer)
                    buffer = ""
            
            if buffer:
                yield self.analyze(buffer)
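The streaming idea above can be reduced to a self-contained sketch: because a generator yields results one at a time, memory use stays constant regardless of input size, and stages compose lazily. The analysis step here is a trivial stand-in:

```python
from typing import Dict, Iterator

def analyze_stream(texts: Iterator[str]) -> Iterator[Dict]:
    """Yield one result per input text; nothing is buffered."""
    for text in texts:
        # Stand-in for a real analysis step.
        yield {"success": True, "length": len(text)}

# Nothing is materialized until results are consumed.
lengths = [r["length"] for r in analyze_stream(iter(["ab", "cdef"]))]
```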

Testing Best Practices

Unit Tests

Good practice

python
import unittest
from text_analyzer import TextAnalyzer, SkillParameters

class TestTextAnalyzer(unittest.TestCase):
    def setUp(self):
        self.analyzer = TextAnalyzer()
    
    def test_sentiment_analysis_positive(self):
        params = SkillParameters(
            text="This is a great product!",
            analysis_type="sentiment"
        )
        result = self.analyzer.execute(params)
        
        self.assertTrue(result["success"])
        self.assertIsNotNone(result["data"])
        self.assertEqual(result["data"]["sentiment"], "positive")
    
    def test_sentiment_analysis_negative(self):
        params = SkillParameters(
            text="This is terrible!",
            analysis_type="sentiment"
        )
        result = self.analyzer.execute(params)
        
        self.assertTrue(result["success"])
        self.assertEqual(result["data"]["sentiment"], "negative")
    
    def test_invalid_analysis_type(self):
        params = SkillParameters(
            text="Test text",
            analysis_type="invalid"
        )
        result = self.analyzer.execute(params)
        
        self.assertFalse(result["success"])
        self.assertIsNotNone(result["error"])
    
    def test_empty_text(self):
        with self.assertRaises(ValueError):
            SkillParameters(
                text="",
                analysis_type="sentiment"
            )
    
    def test_text_too_long(self):
        with self.assertRaises(ValueError):
            SkillParameters(
                text="a" * 10001,
                analysis_type="sentiment"
            )

if __name__ == "__main__":
    unittest.main()

Integration Tests

Good practice

python
import unittest
from skill_management_system import SkillManagementSystem

class TestSkillIntegration(unittest.TestCase):
    def setUp(self):
        self.sms = SkillManagementSystem("./test_skills")
        self.sms.initialize()
    
    def tearDown(self):
        import shutil
        shutil.rmtree("./test_skills")
    
    def test_skill_discovery(self):
        skills = self.sms.list_skills()
        self.assertGreater(len(skills), 0)
    
    def test_skill_loading(self):
        skills = self.sms.list_skills()
        for skill in skills:
            success = self.sms.load_skill(skill['name'])
            self.assertTrue(success)
    
    def test_skill_execution(self):
        self.sms.load_skill("text-analyzer")
        
        result = self.sms.execute_skill("text-analyzer", {
            "text": "Test text",
            "analysis_type": "sentiment"
        })
        
        self.assertTrue(result["success"])
    
    def test_skill_unloading(self):
        self.sms.load_skill("text-analyzer")
        success = self.sms.unload_skill("text-analyzer")
        self.assertTrue(success)
    
    def test_skill_reloading(self):
        self.sms.load_skill("text-analyzer")
        success = self.sms.reload_skill("text-analyzer")
        self.assertTrue(success)

if __name__ == "__main__":
    unittest.main()

Debugging Tips

Logging

Good practice

python
import logging
from typing import Dict, Any

class TextAnalyzer:
    def __init__(self, log_level: str = "INFO"):
        self.logger = logging.getLogger(__name__)
        self.logger.setLevel(getattr(logging, log_level))
        
        # Guard against attaching duplicate handlers when the class is
        # instantiated more than once in the same process.
        if not self.logger.handlers:
            handler = logging.StreamHandler()
            formatter = logging.Formatter(
                '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
            )
            handler.setFormatter(formatter)
            self.logger.addHandler(handler)
    
    def execute(self, params: SkillParameters) -> Dict[str, Any]:
        self.logger.info(f"Starting analysis with type: {params.analysis_type}")
        self.logger.debug(f"Text length: {len(params.text)}")
        
        try:
            result = self._analyze(params)
            
            self.logger.info("Analysis completed successfully")
            return {
                "success": True,
                "data": result,
                "metadata": {},
                "error": None
            }
        except Exception as e:
            self.logger.error(f"Analysis failed: {str(e)}", exc_info=True)
            return {
                "success": False,
                "data": None,
                "metadata": {},
                "error": str(e)
            }

Performance Profiling

Good practice

python
import time
from typing import Dict, Any, Callable
from functools import wraps

def measure_time(func: Callable) -> Callable:
    @wraps(func)
    def wrapper(*args, **kwargs):
        # perf_counter is monotonic and higher-resolution than time.time()
        start_time = time.perf_counter()
        result = func(*args, **kwargs)
        end_time = time.perf_counter()
        
        execution_time = end_time - start_time
        print(f"{func.__name__} executed in {execution_time:.4f} seconds")
        
        return result
    return wrapper

class TextAnalyzer:
    @measure_time
    def analyze(self, text: str) -> Dict:
        return self._perform_analysis(text)
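When a timing decorator shows that a call is slow but not why, the standard library profiler gives per-function call counts and cumulative times. `profile_call` below is a hypothetical helper, not part of any skill framework:

```python
import cProfile
import io
import pstats

def profile_call(func, *args, **kwargs):
    """Run func under cProfile and return (result, text report of the top 10 functions)."""
    profiler = cProfile.Profile()
    result = profiler.runcall(func, *args, **kwargs)
    stream = io.StringIO()
    pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
    return result, stream.getvalue()
```

Sorting by cumulative time surfaces the functions whose subtrees dominate the run, which is usually where optimization effort pays off.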

Debugging Tools

Good practice

python
from typing import Dict, Any

class DebugTextAnalyzer(TextAnalyzer):
    def __init__(self):
        super().__init__()
        self.debug_mode = False
    
    def enable_debug(self):
        self.debug_mode = True
    
    def disable_debug(self):
        self.debug_mode = False
    
    def execute(self, params: SkillParameters) -> Dict[str, Any]:
        if self.debug_mode:
            print(f"DEBUG: Input parameters: {params}")
            print(f"DEBUG: Text length: {len(params.text)}")
            print(f"DEBUG: Analysis type: {params.analysis_type}")
        
        result = super().execute(params)
        
        if self.debug_mode:
            print(f"DEBUG: Result: {result}")
        
        return result

Practice Tasks

Task 1: Refactor an Existing Skill

Choose an existing Skill and refactor it according to the best practices.

Steps

  1. Analyze the problems in the existing Skill
  2. Apply the naming conventions
  3. Improve the documentation
  4. Improve error handling
  5. Add tests
  6. Optimize performance

Deliverables

  • The refactored Skill
  • Notes on the improvements
  • Test code

Task 2: Write a High-Quality Skill

Write a high-quality Skill that follows all the best practices.

Steps

  1. Design the Skill's functionality
  2. Write skill.md
  3. Implement the Skill
  4. Add error handling
  5. Write tests
  6. Optimize performance
  7. Polish the documentation

Deliverables

  • The complete Skill
  • Test code
  • Usage documentation

Task 3: Create a Skill Template

Create a Skill development template that incorporates all the best practices.

Steps

  1. Design the template structure
  2. Include the naming conventions
  3. Include a documentation template
  4. Include an error-handling template
  5. Include a test template
  6. Include a performance-optimization template
  7. Write usage instructions

Deliverables

  • The Skill template
  • Usage documentation
  • Example code

Code Examples

Example 1: A High-Quality Skill Implementation

python
from typing import Dict, Any, List
from pydantic import BaseModel, Field, field_validator
import logging
import hashlib

class SentimentAnalysisParameters(BaseModel):
    text: str = Field(
        ...,
        min_length=1,
        max_length=10000,
        description="The text to analyze for sentiment"
    )
    model: str = Field(
        default="rule-based",
        pattern="^(rule-based|ml|transformer)$",
        description="The sentiment analysis model to use"
    )
    include_scores: bool = Field(
        default=False,
        description="Whether to include confidence scores in the result"
    )
    
    @field_validator('text')
    @classmethod
    def validate_text(cls, v):
        if not v.strip():
            raise ValueError('Text cannot be empty or whitespace only')
        return v.strip()

class SentimentAnalyzer:
    def __init__(self, log_level: str = "INFO"):
        self.logger = logging.getLogger(__name__)
        self.logger.setLevel(getattr(logging, log_level))
        
        # Guard against duplicate handlers when instantiated repeatedly.
        if not self.logger.handlers:
            handler = logging.StreamHandler()
            formatter = logging.Formatter(
                '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
            )
            handler.setFormatter(formatter)
            self.logger.addHandler(handler)
        
        self._cache = {}
    
    def execute(self, params: SentimentAnalysisParameters) -> Dict[str, Any]:
        self.logger.info(f"Starting sentiment analysis with model: {params.model}")
        
        try:
            result = self._analyze(params)
            
            self.logger.info("Sentiment analysis completed successfully")
            return {
                "success": True,
                "data": result,
                "metadata": {
                    "text_length": len(params.text),
                    "model": params.model,
                    "processing_time": 0.05
                },
                "error": None
            }
        except ValueError as e:
            self.logger.error(f"Validation error: {str(e)}")
            return {
                "success": False,
                "data": None,
                "metadata": {},
                "error": f"Validation error: {str(e)}"
            }
        except Exception as e:
            self.logger.error(f"Internal error: {str(e)}", exc_info=True)
            return {
                "success": False,
                "data": None,
                "metadata": {},
                "error": f"Internal error: {str(e)}"
            }
    
    def _analyze(self, params: SentimentAnalysisParameters) -> Dict[str, Any]:
        if params.model == "rule-based":
            return self._rule_based_analysis(params)
        elif params.model == "ml":
            return self._ml_analysis(params)
        elif params.model == "transformer":
            return self._transformer_analysis(params)
        else:
            raise ValueError(f"Unknown model: {params.model}")
    
    def _rule_based_analysis(self, params: SentimentAnalysisParameters) -> Dict[str, Any]:
        cache_key = self._generate_cache_key(params.text)
        
        if cache_key in self._cache:
            self.logger.debug("Using cached result")
            return self._cache[cache_key]
        
        positive_words = ['good', 'great', 'excellent', 'amazing', 'wonderful', 'fantastic']
        negative_words = ['bad', 'terrible', 'awful', 'horrible', 'poor', 'worst']
        
        text_lower = params.text.lower()
        positive_count = sum(1 for word in positive_words if word in text_lower)
        negative_count = sum(1 for word in negative_words if word in text_lower)
        
        if positive_count > negative_count:
            sentiment = "positive"
            score = positive_count / (positive_count + negative_count + 1)
        elif negative_count > positive_count:
            sentiment = "negative"
            score = negative_count / (positive_count + negative_count + 1)
        else:
            sentiment = "neutral"
            score = 0.5
        
        result = {
            "sentiment": sentiment,
            "score": score,
            "positive_words": [word for word in positive_words if word in text_lower],
            "negative_words": [word for word in negative_words if word in text_lower]
        }
        
        if params.include_scores:
            result["confidence"] = min(0.9, 0.5 + abs(score - 0.5))
        
        self._cache[cache_key] = result
        return result
    
    def _ml_analysis(self, params: SentimentAnalysisParameters) -> Dict[str, Any]:
        self.logger.warning("ML model not implemented, falling back to rule-based")
        return self._rule_based_analysis(params)
    
    def _transformer_analysis(self, params: SentimentAnalysisParameters) -> Dict[str, Any]:
        self.logger.warning("Transformer model not implemented, falling back to rule-based")
        return self._rule_based_analysis(params)
    
    def _generate_cache_key(self, text: str) -> str:
        return hashlib.md5(text.encode()).hexdigest()
    
    def clear_cache(self):
        self._cache.clear()
        self.logger.info("Cache cleared")

class BatchSentimentAnalyzer:
    def __init__(self, analyzer: SentimentAnalyzer):
        self.analyzer = analyzer
    
    def analyze_batch(self, texts: List[str], **kwargs) -> List[Dict[str, Any]]:
        results = []
        
        for text in texts:
            try:
                params = SentimentAnalysisParameters(text=text, **kwargs)
                result = self.analyzer.execute(params)
                results.append(result)
            except Exception as e:
                self.analyzer.logger.error(f"Failed to analyze text: {str(e)}")
                results.append({
                    "success": False,
                    "error": str(e)
                })
        
        return results
    
    def analyze_batch_parallel(
        self, 
        texts: List[str], 
        max_workers: int = 4,
        **kwargs
    ) -> List[Dict[str, Any]]:
        from concurrent.futures import ThreadPoolExecutor, as_completed
        
        results = [None] * len(texts)
        
        with ThreadPoolExecutor(max_workers=max_workers) as executor:
            future_to_index = {}
            
            for index, text in enumerate(texts):
                try:
                    params = SentimentAnalysisParameters(text=text, **kwargs)
                    future = executor.submit(self.analyzer.execute, params)
                    future_to_index[future] = index
                except Exception as e:
                    self.analyzer.logger.error(f"Failed to create parameters: {str(e)}")
                    results[index] = {
                        "success": False,
                        "error": str(e)
                    }
            
            for future in as_completed(future_to_index):
                index = future_to_index[future]
                try:
                    results[index] = future.result()
                except Exception as e:
                    self.analyzer.logger.error(f"Failed to analyze text: {str(e)}")
                    results[index] = {
                        "success": False,
                        "error": str(e)
                    }
        
        return results

Example 2: A Complete Test Suite

python
import unittest
import tempfile
import shutil
from pathlib import Path
from sentiment_analyzer import (
    SentimentAnalyzer,
    SentimentAnalysisParameters,
    BatchSentimentAnalyzer
)

class TestSentimentAnalysisParameters(unittest.TestCase):
    def test_valid_parameters(self):
        params = SentimentAnalysisParameters(
            text="This is a great product!",
            model="rule-based",
            include_scores=True
        )
        self.assertEqual(params.text, "This is a great product!")
        self.assertEqual(params.model, "rule-based")
        self.assertTrue(params.include_scores)
    
    def test_default_parameters(self):
        params = SentimentAnalysisParameters(text="Test text")
        self.assertEqual(params.model, "rule-based")
        self.assertFalse(params.include_scores)
    
    def test_empty_text_validation(self):
        with self.assertRaises(ValueError):
            SentimentAnalysisParameters(text="")
    
    def test_whitespace_only_text_validation(self):
        with self.assertRaises(ValueError):
            SentimentAnalysisParameters(text="   ")
    
    def test_text_too_long_validation(self):
        with self.assertRaises(ValueError):
            SentimentAnalysisParameters(text="a" * 10001)
    
    def test_invalid_model_validation(self):
        with self.assertRaises(ValueError):
            SentimentAnalysisParameters(
                text="Test text",
                model="invalid-model"
            )

class TestSentimentAnalyzer(unittest.TestCase):
    def setUp(self):
        self.analyzer = SentimentAnalyzer(log_level="ERROR")
    
    def test_positive_sentiment(self):
        params = SentimentAnalysisParameters(
            text="This is a great and amazing product!",
            model="rule-based"
        )
        result = self.analyzer.execute(params)
        
        self.assertTrue(result["success"])
        self.assertEqual(result["data"]["sentiment"], "positive")
        self.assertGreater(result["data"]["score"], 0.5)
    
    def test_negative_sentiment(self):
        params = SentimentAnalysisParameters(
            text="This is a terrible and awful product!",
            model="rule-based"
        )
        result = self.analyzer.execute(params)
        
        self.assertTrue(result["success"])
        self.assertEqual(result["data"]["sentiment"], "negative")
        self.assertLess(result["data"]["score"], 0.5)
    
    def test_neutral_sentiment(self):
        params = SentimentAnalysisParameters(
            text="This is a product.",
            model="rule-based"
        )
        result = self.analyzer.execute(params)
        
        self.assertTrue(result["success"])
        self.assertEqual(result["data"]["sentiment"], "neutral")
    
    def test_include_scores(self):
        params = SentimentAnalysisParameters(
            text="This is a great product!",
            model="rule-based",
            include_scores=True
        )
        result = self.analyzer.execute(params)
        
        self.assertTrue(result["success"])
        self.assertIn("confidence", result["data"])
    
    def test_caching(self):
        params = SentimentAnalysisParameters(
            text="This is a great product!",
            model="rule-based"
        )
        
        result1 = self.analyzer.execute(params)
        result2 = self.analyzer.execute(params)
        
        self.assertEqual(result1["data"], result2["data"])
    
    def test_cache_clearing(self):
        params = SentimentAnalysisParameters(
            text="This is a great product!",
            model="rule-based"
        )
        
        self.analyzer.execute(params)
        self.analyzer.clear_cache()
        
        result = self.analyzer.execute(params)
        self.assertTrue(result["success"])

class TestBatchSentimentAnalyzer(unittest.TestCase):
    def setUp(self):
        self.analyzer = SentimentAnalyzer(log_level="ERROR")
        self.batch_analyzer = BatchSentimentAnalyzer(self.analyzer)
    
    def test_batch_analysis(self):
        texts = [
            "This is great!",
            "This is terrible!",
            "This is neutral."
        ]
        
        results = self.batch_analyzer.analyze_batch(texts)
        
        self.assertEqual(len(results), 3)
        self.assertTrue(all(result["success"] for result in results))
    
    def test_batch_analysis_with_errors(self):
        texts = [
            "This is great!",
            "",  # Invalid text
            "This is neutral."
        ]
        
        results = self.batch_analyzer.analyze_batch(texts)
        
        self.assertEqual(len(results), 3)
        self.assertTrue(results[0]["success"])
        self.assertFalse(results[1]["success"])
        self.assertTrue(results[2]["success"])
    
    def test_parallel_batch_analysis(self):
        texts = [
            "This is great!",
            "This is terrible!",
            "This is neutral."
        ]
        
        results = self.batch_analyzer.analyze_batch_parallel(texts, max_workers=2)
        
        self.assertEqual(len(results), 3)
        self.assertTrue(all(result["success"] for result in results))

class TestSentimentAnalyzerIntegration(unittest.TestCase):
    def setUp(self):
        self.temp_dir = tempfile.mkdtemp()
        self.skill_dir = Path(self.temp_dir) / "sentiment-analyzer"
        self.skill_dir.mkdir()
        
        self._create_skill_files()
        
        self.analyzer = SentimentAnalyzer(log_level="ERROR")
    
    def tearDown(self):
        shutil.rmtree(self.temp_dir)
    
    def _create_skill_files(self):
        skill_md = self.skill_dir / "skill.md"
        skill_md.write_text("""---
name: "sentiment-analyzer"
version: "1.0.0"
author: "Test Author <test@example.com>"
description: "Analyze sentiment of text"
tags: ["sentiment", "nlp", "text"]
license: "MIT"
python_requires: ">=3.8"
dependencies:
  - pydantic>=2.0.0
---
# Sentiment Analyzer

Analyze the sentiment of text using rule-based, ML, or transformer models.
""")
        
        src_dir = self.skill_dir / "src"
        src_dir.mkdir()
        
        main_py = src_dir / "main.py"
        main_py.write_text("""
from sentiment_analyzer import SentimentAnalyzer, SentimentAnalysisParameters

class SentimentAnalyzerSkill:
    def __init__(self):
        self.analyzer = SentimentAnalyzer()
    
    def execute(self, params: dict) -> dict:
        skill_params = SentimentAnalysisParameters(**params)
        return self.analyzer.execute(skill_params)
""")
    
    def test_skill_discovery(self):
        skill_md = self.skill_dir / "skill.md"
        self.assertTrue(skill_md.exists())
    
    def test_skill_execution(self):
        params = SentimentAnalysisParameters(
            text="This is a great product!",
            model="rule-based"
        )
        
        result = self.analyzer.execute(params)
        
        self.assertTrue(result["success"])
        self.assertEqual(result["data"]["sentiment"], "positive")

if __name__ == "__main__":
    unittest.main()

Summary

Skills best practices are the key to writing high-quality, maintainable, and extensible Skills. In this lesson we covered:

  1. Naming conventions
  2. Documentation guidelines
  3. Design principles
  4. Error-handling best practices
  5. Performance optimization techniques
  6. Testing and debugging methods

Following these best practices helps you build high-quality Skills and improves both development efficiency and code quality.
