The most widely used subtitle guidelines are the Advanced Television Systems Committee (ATSC) standards and the subtitle style guides published by the BBC and Netflix.
Here are some of the most basic criteria to keep in mind:
The invariable rule when creating subtitles is that they start and end in sync with what is heard, whether dialogue, a song, or a cry. Avoid situations where the speaker is still talking after the subtitle has disappeared, or vice versa: the viewer must have enough time both to read the subtitles and to watch the picture.
But the truth is not that simple.
The recommended reading speed for subtitles is 160–180 words per minute, i.e. roughly 0.33–0.375 seconds per word. However, depending on the video content and the audience, this may vary:
However, a subtitle must not remain on screen for longer than 10 seconds (except when it transcribes information already displayed on screen), because an overly long subtitle distracts the viewer.
When two speakers talk within one subtitle, each line should be introduced with a dash, and such a dialogue subtitle should not stay on screen for more than 3 seconds. If a speaker is unknown or off-screen, the speaker's name or gender can be written before that speaker's line.
A subtitle should not take up too much space in the frame.
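The timing rules above can be sketched as a small calculation. This is an illustrative sketch, not part of any standard: the 160–180 words-per-minute range and the 10-second cap come from the guidelines above, while the function name and interface are our own assumptions.

```python
def recommended_duration(text: str,
                         wpm_min: float = 160.0,
                         wpm_max: float = 180.0,
                         max_seconds: float = 10.0) -> tuple[float, float]:
    """Return the (shortest, longest) recommended on-screen time
    in seconds for one subtitle, based on a word-per-minute range.

    Illustrative sketch: the wpm range and 10 s cap follow the
    guidelines in the text; everything else is an assumption.
    """
    words = len(text.split())
    shortest = words / (wpm_max / 60.0)  # fastest reading: 180 wpm
    longest = words / (wpm_min / 60.0)   # slowest reading: 160 wpm
    # If even the shortest time exceeds the cap, the subtitle is too
    # long for one screen and should be split into several subtitles.
    return shortest, min(longest, max_seconds)

lo, hi = recommended_duration("Avoid letting subtitles outlast the speaker")
```

A 6-word subtitle, for example, works out to roughly 2.0–2.25 seconds on screen under these assumptions.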
Subtitle line breaks are often neglected when subtitle creators focus too narrowly on the 40-characters-per-line (CPL) criterion above, producing sentences broken in merciless, painful places. Among other things, a line break should not separate:
Prepositions and the action that follows them:
Linking words, punctuation marks, and subject phrases:
Proper names and their verbs:
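The break rules above can be approximated with a simple wrapping heuristic: keep each line within the CPL limit, but avoid ending a line on a word that should stay attached to what follows. This is a minimal sketch; the 40-CPL limit comes from the text, while the small word lists and the greedy algorithm are illustrative assumptions, far from a complete rule set.

```python
# Words that, per the break rules above, should not end a line.
# Illustrative lists only, not an exhaustive linguistic inventory.
BAD_LINE_ENDINGS = {
    "in", "on", "at", "to", "of", "with",   # prepositions
    "and", "but", "or", "the", "a", "an",   # linking words, articles
}

def wrap_subtitle(text: str, max_cpl: int = 40) -> list[str]:
    """Greedily wrap text at <= max_cpl characters per line,
    carrying trailing prepositions/linking words to the next line."""
    lines: list[str] = []
    current: list[str] = []
    for word in text.split():
        if current and len(" ".join(current + [word])) > max_cpl:
            # About to break: move trailing "bad" words down so the
            # break lands before, not after, the preposition/linker.
            carried: list[str] = []
            while len(current) > 1 and current[-1].lower() in BAD_LINE_ENDINGS:
                carried.insert(0, current.pop())
            lines.append(" ".join(current))
            current = carried + [word]
        else:
            current.append(word)
    if current:
        lines.append(" ".join(current))
    return lines
```

For instance, `wrap_subtitle("She went to the store with her friends and family", max_cpl=20)` breaks before "to", "with", and "and" rather than after them.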
Subtitles are often associated with the translation of films and entertainment programs, but subtitling extends far beyond that. Many legal, judicial, medical, scientific, technical, advertising, and religious videos require subtitles to preserve the original-language audio while making the video accessible in multiple languages.
In addition, there is now demand for live, parallel subtitle translation of digital broadcasts, for three reasons: (1) broadcasts on digital platforms often have a large audience and are watched mainly on personal electronic devices, which have no dedicated equipment for simultaneous interpretation; (2) viewers focus on the broadcast content itself and want to receive it in its original form, without another language mixed in; and (3) content is updated at a dizzying pace. Broadcasts attract most of their viewers in real time; a subtitled version can certainly be produced after the broadcast, but by then the content is outdated, and viewers who revisit it later usually do so because they need the language, not the content.
Although live subtitles appear with some delay and have many limitations, they have opened a new era for subtitle translation: subtitles can now accompany the speaker in real time, much like simultaneous interpretation of comparable quality. Live subtitle translation is also different from machine translation: a machine translator can produce near-instant output only when the content is already available, whereas a live broadcast provides no content in advance, and often unpredictable content.
With subtitle translation currently in a chaotic, self-directed state, it is necessary to set a standard for subtitle presentation in Vietnam. Although broadcasters and specialized agencies have separate panels for content moderation, there is still no review system for the presentation of subtitles. In addition, new software and technologies need to be developed so that real-time subtitling can become a reality.