第29天:Skills开发基础

学习目标

  • 掌握Skills的目录结构
  • 学会编写skill.md文件
  • 能够开发简单的Skill
  • 理解Skills的文档驱动理念
  • 掌握参数定义和验证
  • 学会编写示例代码

核心内容

Skills目录结构详解

标准目录结构

Skills规范定义了统一的目录结构,便于组织和管理技能。

my-skill/
├── skill.md              # 技能定义文件(必需)
├── README.md             # 技能说明文档(推荐)
├── LICENSE               # 许可证文件(推荐)
├── pyproject.toml        # Python项目配置(推荐)
├── requirements.txt      # Python依赖列表(推荐)
├── src/                  # 源代码目录(推荐)
│   ├── __init__.py
│   └── main.py
├── tests/                # 测试目录(推荐)
│   ├── __init__.py
│   └── test_skill.py
└── examples/             # 示例目录(推荐)
    └── example.py

目录说明

| 目录/文件 | 必需 | 说明 |
|-----------|------|------|
| skill.md | 必需 | 技能定义文件,包含技能的描述、参数、示例等 |
| README.md | 推荐 | 技能说明文档,包含安装、使用方法等 |
| LICENSE | 推荐 | 许可证文件,说明技能的使用许可 |
| pyproject.toml | 推荐 | Python项目配置文件,包含项目元数据和依赖 |
| requirements.txt | 推荐 | Python依赖列表,列出所有依赖包 |
| src/ | 推荐 | 源代码目录,存放技能的实现代码 |
| tests/ | 推荐 | 测试目录,存放技能的测试代码 |
| examples/ | 推荐 | 示例目录,存放技能的使用示例 |
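上面的目录结构可以用一小段Python脚本一次性生成。以下是一个示意性的脚手架脚本(函数名和技能名均为示例,可按需调整):

```python
from pathlib import Path

# 按照上文的标准目录结构生成一个Skill骨架(示意脚本)
def scaffold_skill(name: str, base: Path = Path(".")) -> Path:
    root = base / name
    # 先创建子目录
    for d in ["src", "tests", "examples"]:
        (root / d).mkdir(parents=True, exist_ok=True)
    # 再创建空文件占位,内容由后续步骤填写
    files = [
        "skill.md", "README.md", "LICENSE",
        "pyproject.toml", "requirements.txt",
        "src/__init__.py", "src/main.py",
        "tests/__init__.py", "tests/test_skill.py",
        "examples/example.py",
    ]
    for f in files:
        (root / f).touch(exist_ok=True)
    return root

if __name__ == "__main__":
    print(scaffold_skill("my-skill"))
```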

skill.md文件规范

Front Matter

Front Matter是skill.md文件的元数据部分,使用YAML格式:

yaml
---
name: "my-skill"
version: "1.0.0"
author: "Your Name <email@example.com>"
description: "A brief description of the skill"
tags: ["tag1", "tag2", "tag3"]
license: "MIT"
python_requires: ">=3.8"
dependencies:
  - requests>=2.28.0
  - pandas>=1.5.0
---

字段说明

| 字段 | 必需 | 类型 | 说明 |
|------|------|------|------|
| name | 必需 | string | 技能名称,使用kebab-case |
| version | 必需 | string | 版本号,遵循语义化版本规范 |
| author | 必需 | string | 作者信息,包含姓名和邮箱 |
| description | 必需 | string | 技能描述,简短说明技能的功能 |
| tags | 推荐 | array | 技能标签,用于分类和搜索 |
| license | 推荐 | string | 许可证类型,如MIT、Apache-2.0等 |
| python_requires | 推荐 | string | Python版本要求 |
| dependencies | 推荐 | array | Python依赖列表 |
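Front Matter本质上就是skill.md开头两个`---`之间的YAML。下面是把skill.md拆分为元数据和正文的示意函数(函数名为示例;这里为保持自包含只做最简单的键值解析,真实实现建议用PyYAML的yaml.safe_load解析元数据部分):

```python
# 将skill.md拆成Front Matter(元数据dict)和正文(字符串)的示意函数。
# 注意:这里只做最简单的 key: value 解析以保持示例自包含;
# 真实实现建议用PyYAML的yaml.safe_load解析元数据。
def parse_skill_md(content: str):
    if not content.startswith("---"):
        return {}, content
    # 第一个和第二个 '---' 之间是YAML元数据,其余是Markdown正文
    _, meta_text, body = content.split("---", 2)
    meta = {}
    for line in meta_text.strip().splitlines():
        # 跳过缩进行和列表项,只解析顶层的 key: value
        if ":" in line and not line.startswith((" ", "-")):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip().strip('"')
    return meta, body.lstrip("\n")

if __name__ == "__main__":
    sample = '---\nname: "my-skill"\nversion: "1.0.0"\n---\n# My Skill\n'
    print(parse_skill_md(sample)[0])
```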

描述部分

描述部分详细说明技能的功能和用途:

markdown
## Description

The my-skill skill provides the ability to perform specific tasks. It includes the following features:

- Feature 1: Description of feature 1
- Feature 2: Description of feature 2
- Feature 3: Description of feature 3

### Use Cases

This skill is suitable for:

- Use case 1
- Use case 2
- Use case 3

### Limitations

The following limitations should be noted:

- Limitation 1
- Limitation 2

参数定义

参数定义使用结构化的Markdown表格:

markdown
## Parameters

| Parameter | Type | Required | Description | Default | Constraints |
|-----------|------|----------|-------------|---------|-------------|
| param1 | string | Yes | Description of param1 | - | minLength: 1, maxLength: 100 |
| param2 | integer | No | Description of param2 | 10 | minimum: 1, maximum: 100 |
| param3 | array | No | Description of param3 | [] | items: string |
| param4 | object | No | Description of param4 | {} | - |

参数类型

  • string:字符串类型
  • integer:整数类型
  • number:数字类型(包括小数)
  • boolean:布尔类型
  • array:数组类型
  • object:对象类型

约束条件

  • minLength:字符串最小长度
  • maxLength:字符串最大长度
  • minimum:数字最小值
  • maximum:数字最大值
  • pattern:正则表达式模式
  • enum:枚举值列表
  • items:数组元素类型
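这些约束条件可以在运行时逐条检查。下面是一个纯Python的检查示意(只覆盖上面列出的部分约束,items未实现;实际项目可以直接用JSON Schema或Pydantic):

```python
import re

# 按上文列出的部分约束条件,对单个参数值做运行时检查的示意函数,
# 返回错误信息列表(为空表示通过)
def check_constraints(value, constraints: dict) -> list:
    errors = []
    if "minLength" in constraints and len(value) < constraints["minLength"]:
        errors.append(f"长度 {len(value)} 小于 minLength {constraints['minLength']}")
    if "maxLength" in constraints and len(value) > constraints["maxLength"]:
        errors.append(f"长度 {len(value)} 大于 maxLength {constraints['maxLength']}")
    if "minimum" in constraints and value < constraints["minimum"]:
        errors.append(f"值 {value} 小于 minimum {constraints['minimum']}")
    if "maximum" in constraints and value > constraints["maximum"]:
        errors.append(f"值 {value} 大于 maximum {constraints['maximum']}")
    if "pattern" in constraints and not re.fullmatch(constraints["pattern"], value):
        errors.append(f"值 {value!r} 不匹配 pattern {constraints['pattern']}")
    if "enum" in constraints and value not in constraints["enum"]:
        errors.append(f"值 {value!r} 不在 enum {constraints['enum']} 中")
    return errors
```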

返回值定义

返回值定义说明技能的输出格式:

markdown
## Returns

Returns a dictionary containing:

- `success` (boolean): Whether the operation was successful
- `data` (any): The result data
- `metadata` (object): Additional metadata
- `error` (string): Error message if failed

### Success Response

json
{
  "success": true,
  "data": {
    "result": "value"
  },
  "metadata": {
    "timestamp": "2025-01-01T00:00:00Z"
  },
  "error": null
}

### Error Response

json
{
  "success": false,
  "data": null,
  "metadata": {},
  "error": "Error message"
}
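为了保证各个技能的返回结构一致,可以用两个小工具函数来构造上面约定的成功/失败响应(函数名为示例):

```python
from datetime import datetime, timezone
from typing import Any, Dict

# 构造上文约定的统一成功响应;额外的关键字参数并入metadata
def success_response(data: Any, **metadata: Any) -> Dict[str, Any]:
    metadata.setdefault("timestamp", datetime.now(timezone.utc).isoformat())
    return {"success": True, "data": data, "metadata": metadata, "error": None}

# 构造上文约定的统一错误响应
def error_response(message: str) -> Dict[str, Any]:
    return {"success": False, "data": None, "metadata": {}, "error": message}
```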

示例代码

示例代码展示如何使用技能:

markdown
## Examples

### Example 1: Basic usage

python
from my_skill import MySkill

skill = MySkill()
result = skill.execute(
    param1="value1",
    param2=10
)

print(result)

Output:

json
{
  "success": true,
  "data": {...},
  "metadata": {...},
  "error": null
}

### Example 2: Advanced usage

python
from my_skill import MySkill

skill = MySkill()
result = skill.execute(
    param1="value1",
    param2=10,
    param3=["item1", "item2"],
    param4={"key": "value"}
)

print(result)

能力定义方法

基本能力定义

能力定义通过skill.md文件完成,包括能力的描述、参数、返回值等。

yaml
---
name: "text-analyzer"
version: "1.0.0"
author: "Your Name"
description: "Analyze text content"
tags: ["text", "analysis"]
license: "MIT"
---

# Text Analyzer Skill

## Description

The text-analyzer skill provides the ability to analyze text content, including sentiment analysis, keyword extraction, and text summarization.

## Parameters

| Parameter | Type | Required | Description | Default |
|-----------|------|----------|-------------|---------|
| text | string | Yes | Text to analyze | - |
| analysis_type | string | Yes | Type of analysis (sentiment, keywords, summary) | - |
| options | object | No | Additional options | {} |

## Returns

Returns analysis results.

## Examples

### Example 1: Sentiment analysis

python
from text_analyzer import TextAnalyzer

analyzer = TextAnalyzer()
result = analyzer.analyze(
    text="This is a great product!",
    analysis_type="sentiment"
)

复杂数据类型

对于复杂的数据类型,可以使用嵌套的表格定义:

markdown
## Parameters

| Parameter | Type | Required | Description | Default |
|-----------|------|----------|-------------|---------|
| config | object | Yes | Configuration object | - |

### config

| Property | Type | Required | Description | Default |
|----------|------|----------|-------------|---------|
| model | string | Yes | Model name | - |
| parameters | object | No | Model parameters | {} |

### config.parameters

| Property | Type | Required | Description | Default |
|----------|------|----------|-------------|---------|
| temperature | number | No | Sampling temperature | 0.7 |
| max_tokens | integer | No | Maximum tokens | 100 |
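上面嵌套表格描述的config结构可以直接映射为嵌套的数据模型。下面用标准库dataclasses做一个示意(也可以换成Pydantic的嵌套BaseModel):

```python
from dataclasses import dataclass, field

# 对应上文 config.parameters 表的嵌套数据类(示意)
@dataclass
class ModelParameters:
    temperature: float = 0.7
    max_tokens: int = 100

# 对应上文 config 表;model为必需字段,parameters有默认值
@dataclass
class Config:
    model: str
    parameters: ModelParameters = field(default_factory=ModelParameters)

if __name__ == "__main__":
    cfg = Config(model="my-model", parameters=ModelParameters(temperature=0.2))
    print(cfg.parameters.temperature)
```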

参数定义规范

参数验证

参数验证是Skills开发的重要部分,确保输入参数的正确性。

python
from typing import Any, Dict, Optional
from pydantic import BaseModel, Field, field_validator

class SkillParameters(BaseModel):
    text: str = Field(..., min_length=1, max_length=10000)
    # Pydantic v2用pattern指定正则约束(v1的regex参数已废弃)
    analysis_type: str = Field(..., pattern="^(sentiment|keywords|summary)$")
    options: Optional[Dict[str, Any]] = Field(default_factory=dict)
    
    # Pydantic v2用field_validator替代v1的validator
    @field_validator('text')
    @classmethod
    def validate_text(cls, v: str) -> str:
        if not v.strip():
            raise ValueError('Text cannot be empty')
        return v

参数类型定义

使用Python的类型提示来定义参数类型:

python
from typing import Any, Dict, List, Optional

def execute(
    text: str,
    analysis_type: str,
    options: Optional[Dict[str, Any]] = None,
    items: Optional[List[str]] = None,
    flag: bool = False
) -> Dict[str, Any]:
    pass

参数默认值

为可选参数提供合理的默认值:

python
def execute(
    text: str,
    analysis_type: str,
    options: Optional[Dict[str, Any]] = None,
    max_length: int = 1000,
    temperature: float = 0.7
) -> Dict[str, Any]:
    # 可变类型(dict/list)不要直接作默认值;用None作哨兵,在函数体内再创建
    if options is None:
        options = {}
    # max_length和temperature已在签名中给出默认值,不要再写 `x = x or 默认值`,
    # 因为 `or` 会把0、0.0等合法的假值也错误地替换掉
    ...
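之所以把可选的dict参数默认值写成None、再在函数体内替换,是为了避开Python可变默认参数的陷阱。下面的小例子演示了这个陷阱:

```python
# 错误写法:默认值dict在函数定义时只创建一次,会被所有调用共享
def bad_execute(text, options={}):
    options.setdefault("calls", 0)
    options["calls"] += 1
    return options

# 正确写法:用None作哨兵,每次调用都新建一个dict
def good_execute(text, options=None):
    if options is None:
        options = {}
    options.setdefault("calls", 0)
    options["calls"] += 1
    return options

if __name__ == "__main__":
    bad_execute("a"); print(bad_execute("b"))   # 共享状态: {'calls': 2}
    good_execute("a"); print(good_execute("b")) # 每次独立: {'calls': 1}
```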

示例代码编写

基本实现

python
from typing import Any, Dict, Optional
from pydantic import BaseModel, Field

class SkillParameters(BaseModel):
    text: str = Field(..., min_length=1, max_length=10000)
    # Pydantic v2用pattern指定正则约束(v1的regex参数已废弃)
    analysis_type: str = Field(..., pattern="^(sentiment|keywords|summary)$")
    options: Optional[Dict[str, Any]] = Field(default_factory=dict)

class TextAnalyzer:
    def __init__(self):
        pass
    
    def analyze(self, params: SkillParameters) -> Dict[str, Any]:
        try:
            if params.analysis_type == "sentiment":
                result = self._analyze_sentiment(params.text, params.options)
            elif params.analysis_type == "keywords":
                result = self._extract_keywords(params.text, params.options)
            elif params.analysis_type == "summary":
                result = self._summarize_text(params.text, params.options)
            else:
                raise ValueError(f"Unknown analysis type: {params.analysis_type}")
            
            return {
                "success": True,
                "data": result,
                "metadata": {
                    "analysis_type": params.analysis_type,
                    "text_length": len(params.text)
                },
                "error": None
            }
        except Exception as e:
            return {
                "success": False,
                "data": None,
                "metadata": {},
                "error": str(e)
            }
    
    def _analyze_sentiment(self, text: str, options: Dict[str, Any]) -> Dict[str, Any]:
        return {
            "sentiment": "positive",
            "score": 0.8,
            "confidence": 0.9
        }
    
    def _extract_keywords(self, text: str, options: Dict[str, Any]) -> Dict[str, Any]:
        return {
            "keywords": ["AI", "machine learning", "deep learning"],
            "scores": [0.9, 0.8, 0.7]
        }
    
    def _summarize_text(self, text: str, options: Dict[str, Any]) -> Dict[str, Any]:
        return {
            "summary": "This is a summary of the text.",
            "original_length": len(text),
            "summary_length": 50
        }
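当分析类型继续增多时,analyze里的if/elif链可以改成"类型到方法"的调度字典,新增类型只需注册一个处理方法。以下是一个示意实现(类名和占位方法均为示例):

```python
from typing import Any, Callable, Dict

# 用调度字典替代if/elif分支的示意实现
class DispatchingAnalyzer:
    def __init__(self) -> None:
        # 分析类型 -> 处理方法 的映射;新增类型只需在这里注册
        self._handlers: Dict[str, Callable[[str], Dict[str, Any]]] = {
            "sentiment": self._analyze_sentiment,
            "keywords": self._extract_keywords,
        }

    def analyze(self, text: str, analysis_type: str) -> Dict[str, Any]:
        handler = self._handlers.get(analysis_type)
        if handler is None:
            return {"success": False, "data": None, "metadata": {},
                    "error": f"Unknown analysis type: {analysis_type}"}
        return {"success": True, "data": handler(text),
                "metadata": {"analysis_type": analysis_type}, "error": None}

    def _analyze_sentiment(self, text: str) -> Dict[str, Any]:
        return {"sentiment": "neutral", "score": 0.5}  # 占位实现

    def _extract_keywords(self, text: str) -> Dict[str, Any]:
        return {"keywords": text.split()[:3]}  # 占位实现
```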

高级实现

python
import re
from collections import Counter
from typing import Any, Dict

# SkillParameters为上文定义的参数模型,此处直接复用(实际项目中按模块路径导入)

class AdvancedTextAnalyzer:
    def __init__(self):
        self.stopwords = set([
            "the", "a", "an", "and", "or", "but", "in", "on", "at", "to", "for"
        ])
    
    def analyze(self, params: SkillParameters) -> Dict[str, Any]:
        try:
            if params.analysis_type == "sentiment":
                result = self._analyze_sentiment_advanced(params.text, params.options)
            elif params.analysis_type == "keywords":
                result = self._extract_keywords_advanced(params.text, params.options)
            elif params.analysis_type == "summary":
                result = self._summarize_text_advanced(params.text, params.options)
            else:
                raise ValueError(f"Unknown analysis type: {params.analysis_type}")
            
            return {
                "success": True,
                "data": result,
                "metadata": {
                    "analysis_type": params.analysis_type,
                    "text_length": len(params.text),
                    "word_count": len(params.text.split())
                },
                "error": None
            }
        except Exception as e:
            return {
                "success": False,
                "data": None,
                "metadata": {},
                "error": str(e)
            }
    
    def _analyze_sentiment_advanced(self, text: str, options: Dict[str, Any]) -> Dict[str, Any]:
        positive_words = ["good", "great", "excellent", "amazing", "wonderful"]
        negative_words = ["bad", "terrible", "awful", "horrible", "poor"]
        
        words = text.lower().split()
        positive_count = sum(1 for word in words if word in positive_words)
        negative_count = sum(1 for word in words if word in negative_words)
        
        if positive_count > negative_count:
            sentiment = "positive"
            score = positive_count / (positive_count + negative_count + 1)
        elif negative_count > positive_count:
            sentiment = "negative"
            score = negative_count / (positive_count + negative_count + 1)
        else:
            sentiment = "neutral"
            score = 0.5
        
        return {
            "sentiment": sentiment,
            "score": score,
            "positive_count": positive_count,
            "negative_count": negative_count,
            "confidence": max(0.5, abs(score - 0.5) * 2)
        }
    
    def _extract_keywords_advanced(self, text: str, options: Dict[str, Any]) -> Dict[str, Any]:
        max_keywords = options.get("max_keywords", 10)
        min_word_length = options.get("min_word_length", 3)
        
        words = re.findall(r'\b\w+\b', text.lower())
        words = [word for word in words if len(word) >= min_word_length]
        words = [word for word in words if word not in self.stopwords]
        
        word_counts = Counter(words)
        top_keywords = word_counts.most_common(max_keywords)
        
        total_count = sum(word_counts.values())
        
        return {
            "keywords": [word for word, count in top_keywords],
            "scores": [count / total_count for word, count in top_keywords],
            "counts": [count for word, count in top_keywords]
        }
    
    def _summarize_text_advanced(self, text: str, options: Dict[str, Any]) -> Dict[str, Any]:
        max_sentences = options.get("max_sentences", 3)
        
        sentences = re.split(r'[.!?]+', text)
        sentences = [s.strip() for s in sentences if s.strip()]
        
        summary_sentences = sentences[:max_sentences]
        summary = '. '.join(summary_sentences) + '.'
        
        return {
            "summary": summary,
            "original_length": len(text),
            "summary_length": len(summary),
            "sentence_count": len(sentences),
            "summary_sentence_count": len(summary_sentences)
        }

文档注释规范

代码注释

为代码添加清晰的注释:

python
class TextAnalyzer:
    def __init__(self):
        # 如有模型、词表等资源,在此初始化
        pass
    
    def analyze(self, params: SkillParameters) -> Dict[str, Any]:
        try:
            # 根据分析类型分发到对应的私有方法
            if params.analysis_type == "sentiment":
                result = self._analyze_sentiment(params.text, params.options)
            elif params.analysis_type == "keywords":
                result = self._extract_keywords(params.text, params.options)
            elif params.analysis_type == "summary":
                result = self._summarize_text(params.text, params.options)
            else:
                # 未知类型直接抛出,由下方except统一转换为错误响应
                raise ValueError(f"Unknown analysis type: {params.analysis_type}")
            
            # 成功:按统一结构返回结果和元数据
            return {
                "success": True,
                "data": result,
                "metadata": {
                    "analysis_type": params.analysis_type,
                    "text_length": len(params.text)
                },
                "error": None
            }
        except Exception as e:
            # 失败:不向上抛异常,返回统一的错误结构
            return {
                "success": False,
                "data": None,
                "metadata": {},
                "error": str(e)
            }

Docstring

为函数和类添加docstring:

python
class TextAnalyzer:
    def __init__(self):
        pass
    
    def analyze(self, params: SkillParameters) -> Dict[str, Any]:
        """
        Analyze text based on the specified analysis type.
        
        Args:
            params: SkillParameters containing text and analysis type
            
        Returns:
            Dict containing success status, data, metadata, and error
            
        Raises:
            ValueError: If analysis type is unknown
        """
        pass
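docstring中的用法示例还可以写成doctest,使文档示例本身可以被自动验证。示意如下(函数为示例):

```python
def word_count(text: str) -> int:
    """统计文本中的单词数。

    >>> word_count("This is a great product!")
    5
    >>> word_count("")
    0
    """
    return len(text.split())

if __name__ == "__main__":
    # 运行模块中的所有doctest;加 -v 可查看详细结果
    import doctest
    doctest.testmod()
```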

实践任务

任务1:创建Skill目录结构

创建一个完整的Skill目录结构。

步骤

  1. 创建Skill目录
  2. 创建必要的文件和目录
  3. 编写README.md
  4. 创建requirements.txt

输出

  • 完整的Skill目录结构
  • README.md文件
  • requirements.txt文件

任务2:编写skill.md文件

编写一个完整的skill.md文件。

步骤

  1. 编写Front Matter
  2. 编写描述部分
  3. 定义参数
  4. 定义返回值
  5. 编写示例

输出

  • 完整的skill.md文件
  • 参数定义表格
  • 示例代码

任务3:实现简单的Skill

实现一个简单的文本分析Skill。

步骤

  1. 创建Skill类
  2. 实现分析方法
  3. 添加参数验证
  4. 编写测试代码

输出

  • Skill实现代码
  • 参数验证代码
  • 测试代码

代码示例

示例1:完整的skill.md文件

yaml
---
name: "text-analyzer"
version: "1.0.0"
author: "Your Name <email@example.com>"
description: "Analyze text content"
tags: ["text", "analysis", "nlp"]
license: "MIT"
python_requires: ">=3.8"
dependencies:
  - pydantic>=2.0.0
  - requests>=2.28.0
---

# Text Analyzer Skill

## Description

The text-analyzer skill provides the ability to analyze text content, including sentiment analysis, keyword extraction, and text summarization.

### Features

- Sentiment analysis: Determine the emotional tone of text
- Keyword extraction: Identify important keywords in text
- Text summarization: Generate concise summaries of text

### Use Cases

This skill is suitable for:

- Analyzing customer feedback
- Processing social media posts
- Summarizing articles
- Extracting insights from text data

### Limitations

The following limitations should be noted:

- Sentiment analysis is rule-based and may not be accurate for complex text
- Keyword extraction does not consider word context
- Summarization is extractive and may miss important information

## Parameters

| Parameter | Type | Required | Description | Default | Constraints |
|-----------|------|----------|-------------|---------|-------------|
| text | string | Yes | Text to analyze | - | minLength: 1, maxLength: 10000 |
| analysis_type | string | Yes | Type of analysis (sentiment, keywords, summary) | - | enum: ["sentiment", "keywords", "summary"] |
| options | object | No | Additional options for analysis | {} | - |

### options

| Property | Type | Required | Description | Default | Constraints |
|----------|------|----------|-------------|---------|-------------|
| max_keywords | integer | No | Maximum number of keywords to extract | 10 | minimum: 1, maximum: 50 |
| max_sentences | integer | No | Maximum number of sentences in summary | 3 | minimum: 1, maximum: 10 |
| min_word_length | integer | No | Minimum word length for keywords | 3 | minimum: 2, maximum: 10 |

## Returns

Returns a dictionary containing:

- `success` (boolean): Whether the operation was successful
- `data` (any): The analysis results
- `metadata` (object): Additional metadata including analysis type, text length, word count
- `error` (string): Error message if failed

### Success Response

json
{
  "success": true,
  "data": {
    "sentiment": "positive",
    "score": 0.8
  },
  "metadata": {
    "analysis_type": "sentiment",
    "text_length": 100,
    "word_count": 20
  },
  "error": null
}

### Error Response

json
{
  "success": false,
  "data": null,
  "metadata": {},
  "error": "Invalid analysis type"
}

## Examples

### Example 1: Sentiment analysis

python
from text_analyzer import TextAnalyzer

analyzer = TextAnalyzer()
result = analyzer.analyze(
    text="This is a great product!",
    analysis_type="sentiment"
)

print(result)

Output:

json
{
  "success": true,
  "data": {
    "sentiment": "positive",
    "score": 0.5,
    "positive_count": 1,
    "negative_count": 0,
    "confidence": 0.5
  },
  "metadata": {
    "analysis_type": "sentiment",
    "text_length": 24,
    "word_count": 5
  },
  "error": null
}

### Example 2: Keyword extraction

python
from text_analyzer import TextAnalyzer

analyzer = TextAnalyzer()
result = analyzer.analyze(
    text="AI and machine learning are transforming the world.",
    analysis_type="keywords",
    options={
        "max_keywords": 5,
        "min_word_length": 3
    }
)

print(result)

Output:

json
{
  "success": true,
  "data": {
    "keywords": ["machine", "learning", "are", "transforming", "world"],
    "scores": [0.2, 0.2, 0.2, 0.2, 0.2],
    "counts": [1, 1, 1, 1, 1]
  },
  "metadata": {
    "analysis_type": "keywords",
    "text_length": 51,
    "word_count": 8
  },
  "error": null
}

注:实现会先将文本转为小写,长度小于min_word_length的词(如"AI"小写后的"ai")会被过滤掉。

### Example 3: Text summarization

python
from text_analyzer import TextAnalyzer

analyzer = TextAnalyzer()
result = analyzer.analyze(
    text="This is a long text that needs to be summarized. It contains multiple sentences. Each sentence provides information.",
    analysis_type="summary",
    options={
        "max_sentences": 2
    }
)

print(result)

Output:

json
{
  "success": true,
  "data": {
    "summary": "This is a long text that needs to be summarized. It contains multiple sentences.",
    "original_length": 116,
    "summary_length": 80,
    "sentence_count": 3,
    "summary_sentence_count": 2
  },
  "metadata": {
    "analysis_type": "summary",
    "text_length": 116,
    "word_count": 18
  },
  "error": null
}

示例2:Skill实现代码

python
import re
from collections import Counter
from typing import Dict, Any, Optional
from pydantic import BaseModel, Field, field_validator

class SkillParameters(BaseModel):
    text: str = Field(..., min_length=1, max_length=10000)
    # Pydantic v2用pattern指定正则约束(v1的regex参数已废弃)
    analysis_type: str = Field(..., pattern="^(sentiment|keywords|summary)$")
    options: Optional[Dict[str, Any]] = Field(default_factory=dict)
    
    # Pydantic v2用field_validator替代v1的validator
    @field_validator('text')
    @classmethod
    def validate_text(cls, v: str) -> str:
        if not v.strip():
            raise ValueError('Text cannot be empty')
        return v

class TextAnalyzer:
    def __init__(self):
        self.stopwords = set([
            "the", "a", "an", "and", "or", "but", "in", "on", "at", "to", "for"
        ])
        self.positive_words = set([
            "good", "great", "excellent", "amazing", "wonderful", "fantastic",
            "awesome", "outstanding", "superb", "brilliant"
        ])
        self.negative_words = set([
            "bad", "terrible", "awful", "horrible", "poor", "disappointing",
            "unsatisfactory", "substandard", "inferior", "defective"
        ])
    
    def analyze(self, params: SkillParameters) -> Dict[str, Any]:
        try:
            if params.analysis_type == "sentiment":
                result = self._analyze_sentiment(params.text, params.options)
            elif params.analysis_type == "keywords":
                result = self._extract_keywords(params.text, params.options)
            elif params.analysis_type == "summary":
                result = self._summarize_text(params.text, params.options)
            else:
                raise ValueError(f"Unknown analysis type: {params.analysis_type}")
            
            return {
                "success": True,
                "data": result,
                "metadata": {
                    "analysis_type": params.analysis_type,
                    "text_length": len(params.text),
                    "word_count": len(params.text.split())
                },
                "error": None
            }
        except Exception as e:
            return {
                "success": False,
                "data": None,
                "metadata": {},
                "error": str(e)
            }
    
    def _analyze_sentiment(self, text: str, options: Dict[str, Any]) -> Dict[str, Any]:
        words = text.lower().split()
        positive_count = sum(1 for word in words if word in self.positive_words)
        negative_count = sum(1 for word in words if word in self.negative_words)
        
        if positive_count > negative_count:
            sentiment = "positive"
            score = positive_count / (positive_count + negative_count + 1)
        elif negative_count > positive_count:
            sentiment = "negative"
            score = negative_count / (positive_count + negative_count + 1)
        else:
            sentiment = "neutral"
            score = 0.5
        
        confidence = max(0.5, abs(score - 0.5) * 2)
        
        return {
            "sentiment": sentiment,
            "score": score,
            "positive_count": positive_count,
            "negative_count": negative_count,
            "confidence": confidence
        }
    
    def _extract_keywords(self, text: str, options: Dict[str, Any]) -> Dict[str, Any]:
        max_keywords = options.get("max_keywords", 10)
        min_word_length = options.get("min_word_length", 3)
        
        words = re.findall(r'\b\w+\b', text.lower())
        words = [word for word in words if len(word) >= min_word_length]
        words = [word for word in words if word not in self.stopwords]
        
        word_counts = Counter(words)
        top_keywords = word_counts.most_common(max_keywords)
        
        total_count = sum(word_counts.values())
        
        return {
            "keywords": [word for word, count in top_keywords],
            "scores": [count / total_count for word, count in top_keywords],
            "counts": [count for word, count in top_keywords]
        }
    
    def _summarize_text(self, text: str, options: Dict[str, Any]) -> Dict[str, Any]:
        max_sentences = options.get("max_sentences", 3)
        
        sentences = re.split(r'[.!?]+', text)
        sentences = [s.strip() for s in sentences if s.strip()]
        
        summary_sentences = sentences[:max_sentences]
        summary = '. '.join(summary_sentences) + '.'
        
        return {
            "summary": summary,
            "original_length": len(text),
            "summary_length": len(summary),
            "sentence_count": len(sentences),
            "summary_sentence_count": len(summary_sentences)
        }

示例3:测试代码

python
import unittest
from text_analyzer import TextAnalyzer, SkillParameters

class TestTextAnalyzer(unittest.TestCase):
    def setUp(self):
        self.analyzer = TextAnalyzer()
    
    def test_sentiment_analysis_positive(self):
        params = SkillParameters(
            text="This is a great and amazing product!",
            analysis_type="sentiment"
        )
        result = self.analyzer.analyze(params)
        
        self.assertTrue(result["success"])
        self.assertEqual(result["data"]["sentiment"], "positive")
        self.assertGreater(result["data"]["score"], 0.5)
    
    def test_sentiment_analysis_negative(self):
        params = SkillParameters(
            text="This is a terrible and awful product!",
            analysis_type="sentiment"
        )
        result = self.analyzer.analyze(params)
        
        self.assertTrue(result["success"])
        self.assertEqual(result["data"]["sentiment"], "negative")
        # score表示情感强度而非极性,负面文本的score同样大于0.5
        self.assertGreater(result["data"]["score"], 0.5)
    
    def test_keyword_extraction(self):
        params = SkillParameters(
            text="AI and machine learning are transforming the world of technology.",
            analysis_type="keywords",
            options={"max_keywords": 5}
        )
        result = self.analyzer.analyze(params)
        
        self.assertTrue(result["success"])
        # 实现会将文本转为小写,且默认过滤长度小于3的词,因此"AI"(小写后为"ai")不会出现
        self.assertIn("machine", result["data"]["keywords"])
        self.assertIn("learning", result["data"]["keywords"])
    
    def test_text_summarization(self):
        params = SkillParameters(
            text="This is a long text that needs to be summarized. It contains multiple sentences. Each sentence provides information.",
            analysis_type="summary",
            options={"max_sentences": 2}
        )
        result = self.analyzer.analyze(params)
        
        self.assertTrue(result["success"])
        self.assertIn("summary", result["data"])
        self.assertLess(result["data"]["summary_length"], result["data"]["original_length"])
    
    def test_invalid_analysis_type(self):
        # analysis_type受pattern约束,非法值在构造SkillParameters时即抛出ValidationError
        from pydantic import ValidationError
        with self.assertRaises(ValidationError):
            SkillParameters(
                text="Test text",
                analysis_type="invalid"
            )

if __name__ == "__main__":
    unittest.main()

总结

Skills开发基础包括目录结构、skill.md文件规范、参数定义、示例代码编写等内容。通过本节学习,我们掌握了:

  1. Skills的标准目录结构
  2. skill.md文件的编写规范
  3. 参数定义和验证方法
  4. 示例代码的编写技巧
  5. 文档注释规范

下一步,我们将学习Skills文档驱动开发,深入了解文档驱动开发理念和Markdown高级特性。

参考资源