---
task_categories:
- summarization
- text-generation
language:
- en
tags:
- code
size_categories:
- 10K<n<100K
---

# Data Format

Each sample is a JSON object whose fields include the source code, its extracted AST dictionary (`ast_data`), and the target docstring.

# The AST Data

The extractor keeps only the nodes relevant for docstring generation, denoted by this set:

```
KEEP_NODES = {
    'FunctionDef', 'AsyncFunctionDef', 'ClassDef', 'arguments', 'arg',
    'Return', 'If', 'For', 'While', 'Try', 'With', 'Assign', 'Call',
    'Raise', 'ExceptHandler', 'decorator', 'bases', 'Compare', 'BoolOp'
}
```

Everything else is discarded. For example:

**Source Code**

```
def tox_append_version_info() -> str:
    return '[toxfile]'
```

**Resulting AST Dictionary**

```
"ast_data": {
    "type": "FunctionDef",
    "children": [
        { "type": "arguments", "args": [] },
        { "type": "Return", "has_value": true }
    ],
    "name": "tox_append_version_info"
}
```

This dictionary is then flattened by a helper function into a string such as `FunctionDef name:tox_append_version_info arguments Return return:yes`.

# Preprocessing

The dataset generally follows [CodeBERT's code2nl](https://github.com/microsoft/CodeBERT/tree/master/CodeBERT/code2nl) dataset cleaning standards, which are as follows:

- Removed comments from the code
- Removed examples whose code cannot be parsed into an abstract syntax tree
- Removed examples whose documentation contains special tokens (e.g. `<img ...>` or https:...)
- Removed examples whose documentation is not in English

Furthermore, the following cleaning steps specific to this dataset were applied:

- Removed examples where, using CodeT5+'s tokenizer, the combined source_code + ast_data exceeds 512 tokens
- Removed examples where, using CodeT5+'s tokenizer, the docstring exceeds 512 tokens
- Normalized the source code

# Final Statistics

```
{
    "original_samples": 25481,
    "processed_samples": 22099,
    "filter_stats": {
        "success": 22099,
        "non_english": 44,
        "input_too_short": 0,
        "input_too_long": 3309,
        "target_too_short": 0,
        "target_too_long": 29,
        "error": 0
    },
    "split_sizes": {
        "train": 15469,
        "val": 3314,
        "test": 3316
    },
    "input_token_stats": {
        "min": 18,
        "max": 511,
        "avg": 155.931
    },
    "target_token_stats": {
        "min": 6,
        "max": 426,
        "avg": 67.264
    },
    "type_distribution": {
        "method": 8625,
        "class": 1434,
        "function": 5410
    }
}
```

# NOTE

This dataset is *imperfect*. Use at your own discretion.
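The card does not publish the extractor behind the AST data, but the `KEEP_NODES` set plus the example output pin down its shape fairly well. Below is a minimal sketch of how such an extractor could work, assuming the field conventions visible in the example (`name`, `args`, `has_value`, `children` are taken from the card; everything else is a guess):

```python
import ast

# Node types kept by the extractor (copied from the card's KEEP_NODES).
KEEP_NODES = {
    'FunctionDef', 'AsyncFunctionDef', 'ClassDef', 'arguments', 'arg',
    'Return', 'If', 'For', 'While', 'Try', 'With', 'Assign', 'Call',
    'Raise', 'ExceptHandler', 'decorator', 'bases', 'Compare', 'BoolOp',
}

def extract(node):
    """Reduce a Python AST node to a nested dict of kept node types.

    Hypothetical reconstruction: the real extractor is not published,
    so everything beyond the example output's shape is an assumption.
    """
    kind = type(node).__name__
    info = {"type": kind}
    if kind in ("FunctionDef", "AsyncFunctionDef", "ClassDef"):
        info["name"] = node.name
    if kind == "arguments":
        # Positional args only, for brevity; the 'arg' children are
        # already covered by "args", so do not recurse further.
        info["args"] = [extract(a) for a in node.args]
        return info
    if kind == "Return":
        info["has_value"] = node.value is not None
        return info
    children = []
    for child in ast.iter_child_nodes(node):
        if type(child).__name__ in KEEP_NODES:
            children.append(extract(child))
        else:
            # Discarded wrapper node: splice its kept descendants upward.
            children.extend(extract(child).get("children", []))
    if children:
        info["children"] = children
    return info

example = "def tox_append_version_info() -> str:\n    return '[toxfile]'"
ast_dict = extract(ast.parse(example).body[0])
```

Run on the card's example function, this reproduces the dictionary shown above (up to key order).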
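The flattening helper is likewise unpublished; the card only shows one example rendering, `FunctionDef name:tox_append_version_info arguments Return return:yes`. A sketch that reproduces that rendering from the nested dictionary, under the assumption that each field maps to a `key:value` token, could look like:

```python
def flatten(node):
    """Render the nested AST dict as a flat token string.

    Hypothetical reconstruction matching the card's single example;
    the real helper's token conventions may differ.
    """
    parts = [node["type"]]
    if "name" in node:
        parts.append(f"name:{node['name']}")
    if "has_value" in node:
        parts.append("return:yes" if node["has_value"] else "return:no")
    for arg in node.get("args", []):
        parts.append(flatten(arg))
    for child in node.get("children", []):
        parts.append(flatten(child))
    return " ".join(parts)

# The example dictionary from the card.
ast_dict = {
    "type": "FunctionDef",
    "children": [
        {"type": "arguments", "args": []},
        {"type": "Return", "has_value": True},
    ],
    "name": "tox_append_version_info",
}
flat = flatten(ast_dict)
```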
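The token-length filters from the preprocessing steps can be sketched as below. The real pipeline uses CodeT5+'s tokenizer; here `tokenize` is any callable returning a token list, and the exact way source_code and ast_data are combined before counting is an assumption — only the 512 limits come from the card.

```python
MAX_INPUT_TOKENS = 512   # limit on combined source_code + ast_data (from the card)
MAX_TARGET_TOKENS = 512  # limit on the docstring (from the card)

def keep_example(source_code, ast_flat, docstring, tokenize):
    """Return True if an example survives both length filters.

    `tokenize` stands in for the CodeT5+ tokenizer; joining the code and
    flattened AST with a space is a simplifying assumption.
    """
    input_len = len(tokenize(source_code + " " + ast_flat))
    target_len = len(tokenize(docstring))
    return input_len <= MAX_INPUT_TOKENS and target_len <= MAX_TARGET_TOKENS
```

For instance, with a whitespace tokenizer standing in, a short example passes while one with 600 tokens of input is dropped.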