---
task_categories:
- summarization
- text-generation
language:
- en
tags:
- code
size_categories:
- 10K<n<100K
---

```
{
    : ,
    : ,
    : ,
    : ,
    : ,
    : ,
    :
}
```

# The AST Data

The extractor keeps only the AST nodes relevant for docstring generation, denoted by this set:

```
KEEP_NODES = {
    'FunctionDef', 'AsyncFunctionDef', 'ClassDef', 'arguments', 'arg',
    'Return', 'If', 'For', 'While', 'Try', 'With', 'Assign', 'Call',
    'Raise', 'ExceptHandler', 'decorator', 'bases', 'Compare', 'BoolOp'
}
```

Everything else is discarded. For example:

**Source Code**

```
def tox_append_version_info() -> str:
    return '[toxfile]'
```

**Resulting AST Dictionary**

```
"ast_data": {
    "type": "FunctionDef",
    "children": [
        {
            "type": "arguments",
            "args": []
        },
        {
            "type": "Return",
            "has_value": true
        }
    ],
    "name": "tox_append_version_info"
}
```

This dictionary is then flattened by a helper function into a string such as `FunctionDef name:tox_append_version_info arguments Return return:yes`.

# Preprocessing

The dataset generally follows [CodeBERT's code2nl](https://github.com/microsoft/CodeBERT/tree/master/CodeBERT/code2nl) dataset cleaning standards, which are as follows:

- Removed comments from the code
- Removed examples where the code cannot be parsed into an AST
- Removed examples whose documents contain special tokens (e.g. `<img ...>` or `https:...`)
- Removed examples whose documents are not in English

Furthermore, the following cleaning steps specific to this dataset were applied:

- Removed examples where, using CodeT5+'s tokenizer, the combined token count of `source_code` + `ast_data` exceeds 512
- Removed examples where, using CodeT5+'s tokenizer, the docstring exceeds 512 tokens
- Removed examples whose docstrings are too short (the `docstring_too_short` count in the statistics below)

# Final Statistics

```
{
    "original_samples": 128880,
    "processed_samples": 36536,
    "filter_stats": {
        "success": 36536,
        "non_english": 1185,
        "docstring_too_long": 1047,
        "input_too_long": 9185,
        "docstring_too_short": 74013,
        "error": 0,
        "error: unhashable type: 'list'": 6914
    },
    "split_sizes": {
        "train": 25575,
        "val": 5480,
        "test": 5481
    },
    "input_token_stats": {
        "min": 16,
        "max": 511,
        "avg": 161.956
    },
    "target_token_stats": {
        "min": 4,
        "max": 506,
        "avg": 72.421
    },
    "type_distribution": {
        "function": 9556,
        "method": 13019,
        "class": 3000
    }
}
```

# NOTE

This dataset is *imperfect*. Use at your own discretion.
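The card does not include the extractor or the flattening helper themselves. Below is a minimal sketch of how filtering to `KEEP_NODES` and flattening could be implemented with Python's `ast` module; the function names `extract` and `flatten` are illustrative, and `'decorator'`/`'bases'` (which are AST fields rather than node types) are not handled:

```python
import ast

KEEP_NODES = {
    'FunctionDef', 'AsyncFunctionDef', 'ClassDef', 'arguments', 'arg',
    'Return', 'If', 'For', 'While', 'Try', 'With', 'Assign', 'Call',
    'Raise', 'ExceptHandler', 'decorator', 'bases', 'Compare', 'BoolOp'
}

def extract(node):
    """Collect KEEP_NODES descendants of `node`, skipping over other nodes."""
    out = []
    for child in ast.iter_child_nodes(node):
        kind = type(child).__name__
        children = extract(child)
        if kind in KEEP_NODES:
            entry = {"type": kind}
            if children:
                entry["children"] = children
            if isinstance(child, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                entry["name"] = child.name
            elif isinstance(child, ast.arguments):
                entry["args"] = [a.arg for a in child.args]
            elif isinstance(child, ast.Return):
                entry["has_value"] = child.value is not None
            out.append(entry)
        else:
            # Drop the node itself but keep any kept descendants.
            out.extend(children)
    return out

def flatten(entry):
    """Flatten a node dictionary into a space-separated string."""
    parts = [entry["type"]]
    if "name" in entry:
        parts.append(f"name:{entry['name']}")
    if entry["type"] == "Return":
        parts.append("return:yes" if entry.get("has_value") else "return:no")
    for child in entry.get("children", []):
        parts.append(flatten(child))
    return " ".join(parts)

source = "def tox_append_version_info() -> str:\n    return '[toxfile]'"
ast_data = extract(ast.parse(source))[0]
print(flatten(ast_data))
# FunctionDef name:tox_append_version_info arguments Return return:yes
```

On the example above this reproduces both the `ast_data` dictionary and the flattened string shown in the card.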
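The length filters described in the Preprocessing section could be sketched as below. The tokenizer is passed in as a callable; in practice it would be CodeT5+'s tokenizer (e.g. `AutoTokenizer.from_pretrained(...)` from `transformers` with a CodeT5+ checkpoint). The field names `source_code`, `ast_data`, and `docstring` are assumptions, since the card's field list is elided:

```python
MAX_TOKENS = 512

def keep_example(example, tokenize):
    """Return True if the example survives both length filters."""
    # Input side: source code plus the flattened AST string.
    input_text = example["source_code"] + " " + example["ast_data"]
    if len(tokenize(input_text)) > MAX_TOKENS:
        return False  # would be counted as input_too_long
    # Target side: the docstring itself.
    if len(tokenize(example["docstring"])) > MAX_TOKENS:
        return False  # would be counted as docstring_too_long
    return True

# Illustrative stand-in tokenizer (whitespace split) for demonstration only:
example = {
    "source_code": "def f(): return 1",
    "ast_data": "FunctionDef name:f arguments Return return:yes",
    "docstring": "Return the constant 1.",
}
print(keep_example(example, str.split))  # True
```

With a real subword tokenizer the counts would differ from whitespace splitting, but the filtering logic is the same.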
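The final statistics are internally consistent: the filter buckets partition the original samples exactly, and the splits partition the processed samples in roughly a 70/15/15 ratio. This can be checked directly from the numbers in the card:

```python
filter_stats = {
    "success": 36536, "non_english": 1185, "docstring_too_long": 1047,
    "input_too_long": 9185, "docstring_too_short": 74013,
    "error": 0, "error: unhashable type: 'list'": 6914,
}
split_sizes = {"train": 25575, "val": 5480, "test": 5481}

# Every original sample lands in exactly one filter bucket.
assert sum(filter_stats.values()) == 128880  # original_samples

# The splits partition the processed samples.
assert sum(split_sizes.values()) == 36536  # processed_samples

ratios = {k: round(v / 36536, 2) for k, v in split_sizes.items()}
print(ratios)  # {'train': 0.7, 'val': 0.15, 'test': 0.15}
```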