I have read that when one has a block tag that uses something like this, for example:

:::foo
normal element
:::

it can be nested like so:

::::foo
:::foo
nested element
:::
::::

but I am having a bit of trouble with matching the tags with regex in my extensions.
As far as I know, the start should simply be something like start(src) { return src.match(/^:::foo\n/)?.index; }. This only tells marked that this extension might be interested in processing the src.
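Just to sketch the overall shape (the renderer and the div wrapper here are my own assumptions for illustration, not something prescribed by marked), the extension would be wired up roughly like this:

import { marked } from 'marked';

const foo = {
  name: 'foo',
  level: 'block',
  // start() only hints at which index the tokenizer should be tried
  start(src) { return src.match(/^:::foo\n/)?.index; },
  tokenizer(src, tokens) {
    // the actual matching logic goes here (see below)
  },
  renderer(token) {
    // render the child tokens that the tokenizer collected
    return `<div class="foo">${this.parser.parse(token.tokens)}</div>\n`;
  },
};

marked.use({ extensions: [foo] });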
Later, in the tokenizer, I would have something like:
tokenizer(src, tokens) {
  const rule = /^:::foo\n([\s\S]*?)\n:::/;
  const match = rule.exec(src);
  if (match) {
    const token = {
      type: 'foo',
      raw: match[0],
      tokens: []
    };
    this.lexer.blockTokens(match[1], token.tokens);
    return token;
  }
},

Normally I would end the rule with \n to signal the end of the block, but then I would be unable to match the :::: version, because I would then be looking for \n:::\n. But even so, if I have nested elements, this is not working properly. So my question is: how should these regexes look in order to properly match the closing "tags" when nesting?
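For reference, on the flat example from the top of the post this rule does behave as expected (just a quick check, with the input string written out by hand):

const rule = /^:::foo\n([\s\S]*?)\n:::/;
const match = rule.exec(':::foo\nnormal element\n:::');
console.log(match[1]);
// -> "normal element"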
Maybe one thing to note is that in the case of a blockquote, the pattern is actually:
> quote
> > nested
> > quote

and it will look like this:

quote
    nested
    quote
PS: I can add :{3,} to the beginning and end of the patterns, but that does not help, because while it will match the opening tag of the parent, it will also match an opening tag of the child as the closing tag. So that is not working either.
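To see where the :{3,} variant goes wrong, here is what it produces on the nested example from above; the lazy body stops at the first line beginning with three or more colons, which here is a fence of the inner block:

const rule = /^:{3,}foo\n([\s\S]*?)\n:{3,}/;
const match = rule.exec('::::foo\n:::foo\nnested element\n:::\n::::');
console.log(match[0]);
// -> "::::foo\n:::foo\nnested element\n:::"
// the match ends at the inner block's fence, so the outer "::::" is never consumed
console.log(match[1]);
// -> ":::foo\nnested element"
// the captured body contains the inner opening but not its closing fence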
I think I figured it out. Not sure if this is the "right" way to do it, but it is working:
const tabs = {
  name: 'tabs',
  level: 'block',
  start(src) { return src.match(/^:{3,}tabs\n/)?.index; },
  tokenizer(src, tokens) {
    let count = 0;
    // Because this is a block token, we will actually receive "\n" as the first character.
    // Count the colons of the opening fence so the closing fence can be required
    // to have the same number.
    for (let i = 0; i < src.length; i++) {
      if (src.charAt(i) === ':') {
        count++;
        continue;
      }
      if (count > 0) {
        break;
      }
    }

    if (count === 0) {
      return;
    }

    // Build the rule dynamically: the closing fence must use exactly as many
    // colons as the opening one, so an inner ::: block cannot close an outer
    // :::: block.
    const pattern = `^:{${count}}tabs\\n([\\s\\S]*?)\\n:{${count}}`;
    const rule = new RegExp(pattern);
    const match = rule.exec(src);

    if (match) {
      const token = {
        type: 'tabs',
        raw: match[0],
        tokens: []
      };
      this.lexer.blockTokens(match[1], token.tokens);
      token.tokens = token.tokens.filter(t => t.type === 'tab');
      return token;
    }
  },
};
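The trick is that the closing fence is rebuilt with the same colon count as the opening one, so an outer :::: block can only be closed by ::::, never by an inner :::. A quick standalone check of just that regex logic, with a made-up nested input:

const count = 4; // what the counting loop would find for an outer "::::tabs" fence
const rule = new RegExp(`^:{${count}}tabs\\n([\\s\\S]*?)\\n:{${count}}`);
const src = '::::tabs\n:::tabs\nnested\n:::\n::::';
console.log(rule.exec(src)[1]);
// -> ":::tabs\nnested\n:::"
// the inner block is captured whole, fences included, and can then be
// re-lexed via this.lexer.blockTokens()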