What determines the quality of an AutoTranscript?

It all starts with a good recording. The M>D system records every individual participant on their own laptop, in the best unfiltered and uncompressed format. Because the audio is not streamed over the internet, we do not run into bandwidth restrictions, and the quality of the recording is not affected by WiFi or internet glitches. The internet connection may even be lost for some time without affecting the recording.
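The M>D client itself is not public, but the record-locally, upload-in-the-background idea can be sketched as follows. All names here are illustrative assumptions, not M>D's actual code; the point is that the local recording never depends on the network:

```python
import queue
import threading
import time

def record_meeting(chunks, upload, retry_delay=0.01):
    """Record every audio chunk locally first, and upload in a
    background thread. If the connection drops, the upload simply
    retries; the local recording itself is never affected.

    `chunks` is an iterable of audio chunks; `upload` is a callable
    that may raise ConnectionError when the network is down."""
    local = []                 # the local, lossless recording
    pending = queue.Queue()    # chunks waiting to be uploaded

    def uploader():
        while True:
            chunk = pending.get()
            if chunk is None:          # sentinel: recording finished
                return
            while True:                # retry until the upload succeeds
                try:
                    upload(chunk)
                    break
                except ConnectionError:
                    time.sleep(retry_delay)

    t = threading.Thread(target=uploader)
    t.start()
    for chunk in chunks:
        local.append(chunk)    # recording continues regardless of network
        pending.put(chunk)
    pending.put(None)
    t.join()
    return local
```

Even if `upload` fails for a while, `local` ends up complete and every chunk is eventually delivered, which mirrors why a lost internet connection does not affect the recording.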

The individual recordings per participant are fed separately into the Speech-to-Text Engine (StTE). The StTE has to deal with only one voice at a time, so it can optimally adapt to that voice profile.

M>D uses some clever optimization strategies for obtaining the best possible AutoTranscript.

The text fragments generated per participant by the StTE are compiled into one document: the AutoTranscript of the meeting.

The quality of the microphone is also a key factor, especially in noisy surroundings. Built-in PC microphones differ per brand and type, but they are mostly sufficient. If you are in a noisy environment, or if you are experiencing poor-quality audio and consequently a low-quality AutoTranscript, please use a headset.

M>D has selected some excellent headsets and tabletop microphones with superb noise cancellation and directional sensitivity. Please contact us.

The AutoTranscript is better when participants talk in a normal tone of voice and in grammatically correct sentences.

Names might not always come out correctly. The same goes for high-tech language or trade-specific jargon. Please use the vocabulary function in such cases.

Can we retrofit a version of M>D in our existing conference rooms?

Yes, you can. This involves some technical enhancements, such as digitally splitting the microphone signals and, in most cases, installing high-quality unidirectional microphones.

While editing an AutoTranscript we can listen to the original recordings. Is any audio processing done to these recordings?

No. They are converted to MP3 format, but there is no audio enhancement involved: you hear the original recording as is. The audio files that are fed into the M>D AutoTranscribing engine, on the other hand, are processed to get the best possible transcript. M>D uses several optimization strategies and algorithms, really clever ones, but those are the tricks of our trade.

How about several types of English spoken in the same meeting, such as US and UK English?

The language is currently set for the meeting as a whole. We may add the option to set a language per participant. A person should never speak several languages during the same meeting, at least for now. Please contact us about this type of issue.

How about dialects?

Heavy use of dialect will result in a poor-quality AutoTranscript. For English, several variants are supported, e.g. South African and UK English. You can also select Dutch or Flemish.

What is the maximum number of participants in one meeting?

Up to 50 persons can be supported within one meeting. Please contact us for larger meetings, so we can optimize M>D for a larger group in a specific instance on our servers.

Why would we need an in-house, on-premise implementation of M>D?

Much better security. The standard version is already among the best on the market, but with your own setup of M>D you can implement security measures to an even higher standard.

You can also customize M>D for your own needs: rebrand it or integrate it into your existing ICT infrastructure. You can optimize AutoTranscript quality by selecting the StTE that is most suitable for your organization; we have developed tools for benchmarking StTEs.
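How M>D's benchmarking tools work internally is not described here, but speech-to-text engines are commonly compared by their word error rate (WER): the word-level edit distance between a reference transcript and the engine's output, divided by the reference length. A minimal sketch of that standard metric:

```python
def word_error_rate(reference, hypothesis):
    """Word Error Rate: minimum number of word substitutions,
    insertions, and deletions needed to turn `hypothesis` into
    `reference`, divided by the number of reference words."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edits to turn the first j hypothesis words
    # into the first i reference words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)
```

Running the same reference recordings through several engines and comparing their WER is one straightforward way such a benchmark could rank candidate StTEs; lower is better.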

Other options are the add-on of data-analysis systems, NLU and NLP technologies, and in-house training of AI functionality. Please contact us for further discussion of this subject.

Can M>D generate an auto caption (a real-time simultaneous transcript) while in the meeting?

This can be done in English as an extra feature, and for other languages in an on-premise implementation. The quality is not as good as that of the AutoTranscript made after the meeting, due to bandwidth restrictions and WiFi glitches. For an on-premise implementation we can improve the quality with enhanced real-time bandwidth management in the videoconferencing system. This is not easy stuff; please contact us.

How about specific words and jargon?

Normally the system will not transcribe jargon correctly. You can use the vocabulary function: before transcription you load what we call a phrase-set, with specific words and phrases for that meeting. Phrase-sets may be stored per user, or can be copied into M>D just before starting the AutoTranscript function.
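M>D's phrase-set mechanism is not documented in detail here; most speech engines accept such terms as recognition hints before transcription even starts. Purely as an illustration of the idea, the sketch below shows a simpler post-processing variant, where words the engine got nearly right are snapped to the closest phrase-set term (all names are assumptions, not M>D's API):

```python
import difflib

def apply_phrase_set(transcript_words, phrase_set, cutoff=0.8):
    """Replace words that closely resemble a phrase-set term with
    that term. `cutoff` is the minimum similarity (0..1) required
    before a replacement is made."""
    corrected = []
    for word in transcript_words:
        match = difflib.get_close_matches(word, phrase_set, n=1, cutoff=cutoff)
        corrected.append(match[0] if match else word)
    return corrected
```

For example, a phrase-set containing "AutoTranscript" would fix a mis-cased "autotranscript" in the output, while leaving ordinary words untouched.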

Can you speak several languages within one meeting?

No, please stick to one language.

How many languages does M>D support?

In its current setup, around 150, including several dialects of the same language. Quality may differ per language; English is the best-handled language, and Dutch is also quite good.