On-Demand Webinar: Demystifying Audiovisual (AV) Experience Testing Using AI

Video is no longer just for entertainment giants. It is rapidly becoming a cornerstone of the user experience across platforms, from language learning apps like Duolingo to product demonstrations on Amazon. While developers and QA engineers meticulously craft scripts using Selenium and Appium to ensure application functionality, the question remains: how can we automate testing for this vital video component without additional scripting? Watch our one-hour on-demand webinar to demystify the secrets of audiovisual experience testing using AI.

To know more about HeadSpin, visit: https://www.headspin.io/
Transcript
00:00So, I'm Dibangan, Dibangan Samanta, people fondly call me Dave over here, and I'm a Solutions
00:06and Offerings Manager.
00:08Now, my role typically is of a consultant for our clients.
00:13So whenever a client comes to us, I figure out what is the best strategy for them to
00:19transform their QA processes, and it involves multiple things.
00:23It could be their frameworks that they are choosing, it could be the systems that they
00:27want to integrate with, or simply put, how can they achieve or receive more ROI from
00:35the entire transformation project, right?
00:37It doesn't necessarily have to be the platform, it could be the processes, it could be the
00:42services, anything that there is, my goal is to make sure our clients get the ROI.
00:48But lately, I've been working more with the product team at Headspin, and we are radically
00:54improving our products, our services, our entire offering.
00:59So yeah, I think you guys should stay tuned for that.
01:03We are going to do many more webinars, multiple such topics, so I'm sure we'll discuss that
01:09in the near future.
01:10But then today, the topic at hand that we have is audiovisual experience testing using
01:17AI.
01:18Now, it's a very common topic, it has been going on in the industry, but right now, again,
01:25it is trending, people are discussing it, so I thought, let's do a revision, right?
01:30Let's talk about it once more, and let's talk about the developments that have happened,
01:34what is going on, how does the industry approach it?
01:38But before we even begin about the testing part of things, I think it's important that
01:42we define what can be considered as an audiovisual experience.
01:48And we can start it by breaking down the words, right?
01:51It's the simplest thing, very elementary, audio plus visual, that's audiovisual for
01:56you.
01:57Anything that involves sound and sight, hearing and seeing, can be considered audiovisual.
02:03But here's the thing, here's the takeaway, whenever we say audiovisual, what comes to
02:08our mind?
02:10I have YouTube opened over here, I have Netflix opened over here, and then I also have Cristiano's
02:15Instagram.
02:17Now, traditionally, whenever the word audiovisual experience has come up, it has to do with
02:24some kind of video or image content, content being the key over here.
02:30So even when we talk about YouTube, right, it's video content, when we talk about Netflix,
02:35it's long-form video content, and Instagram can be short-form video content, or maybe
02:41just static images.
02:44But end of the day, it's mostly about content, it's mostly about media that the people want
02:50to consume.
02:51And for the most part of it, yes, they are right.
02:54This is an experience, and it's definitely audiovisual.
02:59But the industry is growing, we've grown from web two to web three, we are seeing, sorry,
03:04web one to web two, actually, web three, it's a different thing.
03:08But in all of these, we have seen a lot of developments that have happened, right?
03:13We have websites that are now supporting Three.js, which is a 3D framework.
03:19And if I just look at this particular screen from Duolingo, right, it's an amazing application,
03:26love it.
03:27But when it comes to audiovisual experience of Duolingo, this is not a video platform,
03:33not by any means.
03:34But then they've got these tiny animations, right?
03:37All of these characters, they are animated.
03:39Anytime you click a button, there is audio feedback, just so that
03:44you can get help to learn or answer the quizzes that the platform has.
03:50There are so many visual feedback and cues.
03:53Now, would we consider that audiovisual experiences?
03:57Of course we would, right?
03:58Because all of that helps with the engagement, right?
04:01It keeps us immersed.
04:03So if I have to answer my original question in what is audiovisual experience, I think
04:11safe to say in 2024, anything but a white screen, right?
04:15That's the best way I feel to put it across.
04:19Now the question is, these things, they are great, right?
04:24These experiences are great, keeps the users immersed.
04:27But does it have any real usage, right?
04:31Is it required in the real world?
04:33And the simple answer is, of course, yes.
04:37Why?
04:38These experiences, they keep your users immersed.
04:41Now what that means is your users are going to spend more time on the application, or
04:46maybe we can flip it a little bit.
04:49Your users will actually have hard time leaving your application, right?
04:54And that is, I think, sort of every product manager's, every UX/UI designer's dream.
05:02So through these experiences, we are trying to immerse the user, which means it increases
05:07our engagement, the session times.
05:10There are other metrics such as the MAU, DAU, we can talk about those metrics as well.
05:14But end of the day, what is happening is engagement goes up, retention goes up.
05:19Now when retention goes up, it's revenue, right?
05:22We are retaining our customers, but also we are growing our customer base.
05:27So all of this flows down to one thing, and that is revenue cash flow for the application.
05:36And it is a little more important in 2024, or maybe in this particular decade, because
05:44we haven't always focused on audiovisual experiences, right?
05:49We are focusing on it right now.
05:51Usually it used to be something that the industries like gaming and entertainment, social media,
05:57I even decided to include ed tech and learning management systems.
06:02So they, I mean, it's not your traditional entertainment, but then again, huge media
06:08firm, right?
06:10So these industries were the ones that focused more on that.
06:14And we are going to talk about testing frameworks in a few minutes.
06:20So before we deep dive into that, right, all of these companies, they've been doing it
06:24for a while.
06:26But the question is, right, whatever they have been doing, is it important for the other
06:30industries?
06:31We've got BFSI today, we've got retail and commerce, travel and hospitality, healthcare,
06:37automotive, telco, or simply enterprise applications, applications like let's say Atlassian, they've
06:44got Jira and Confluence, then there's SAP, which is an ERP, right?
06:49Do these applications really care about audio-visual experience or should they?
06:56Once again, the simple answer is yes, that is one of the reasons we are talking about
07:00it.
07:01But I've got a couple of examples, and these examples, right, really define what is happening
07:08to the industry.
07:09Initially, it started with Netflix, moving towards YouTube, and then Instagram, right?
07:16So sort of they've been able to dictate what users really want, dictate the habits of the
07:24users, right?
07:25These are all habit-forming products.
07:27Now once I'm engrossed in the user experience of Instagram, right, which is a social media
07:35application totally different from a, let's say, a banking application, I do want the
07:39same experience.
07:41My attention span has been reduced to what now Instagram dictates, right?
07:47And we see examples of this.
07:49IndiGo, it's a popular airline, it's a carrier in India, they have started incorporating
07:56short-form video content on their website.
07:59Amazon, they've been doing it for a while now for almost all of their products, any
08:06expensive flagship products, they always have that product video.
08:10But recently, they've also included features where they are creating content, again, going
08:15back to content, but they've been doing that, right?
08:18Because that is what engages the users, allows the users to explore the product, and again,
08:24the more the engagement, the higher the chances that they add that to the cart, and then obviously
08:30see to the end of it.
08:32But another interesting one is this animation that I got from PhonePe, and sorry, this is
08:38not the original PhonePe, again, for everyone, let me just call it out, PhonePe is one of
08:45our payment applications here in India, quite popular.
08:50Now, this is not the original application, I downloaded this from PhonePe's Instagram,
08:56of course, because being a financial application, they have blocked everything, you cannot record
09:00your screen, you cannot take screenshots, which is a good thing, love it, and my payment
09:05apps protect me.
09:07The important part to focus over here is how PhonePe had been iterating on their payment
09:17confirmation animation, and their idea was simple, right?
09:22When I scan the QR code, or when I put in the digits to make the payment, and hit the
09:27payment button, there's a time, right?
09:31There's a time difference, it's near real time, let's call it like that, but it's still,
09:35in some cases, right, depending upon your network, it can take up to 10 to 15 seconds,
09:41and what is happening to the user really in that 10 to 15 seconds?
09:47Are they calm?
09:48Are they frustrated?
09:49Are they figuring out, is the payment going to go through, there's a lot of things going
09:52on in their mind, and that is when PhonePe's UX team, of course, they're the ones that
09:59figured out that, nope, we need a feedback loop, right?
10:04This loop that we started with making the payment, this needs to end on a good note,
10:09and that is when, I think it'll just pop up right now, the final one, the one with the
10:14entire green background, not this one, yeah, this one, right?
10:23This is the one that they finally went with, because when they actually tested it with
10:27the users, that one had the most effect, let's say, right, where the customers almost
10:35felt relieved looking at that, right?
10:37It sort of felt rewarding that, yes, you made the payment, and you waited for your 10 seconds,
10:43and yep, you got the confirmation.
10:45So that is the kind of role audiovisual experience is playing today on the entire digital native
10:52scene.
10:54And time and time again, I've called this out, we are in a digital native era.
11:00It doesn't matter what industry, your e-commerce application, your website, whatever you are,
11:06it's a digital native, right?
11:07You have to have something digital, and unfortunately, yep, there are other people who are dictating
11:14what the digital experience should be, but anyways, we're not here today to debate about
11:20what's going on, or what can we do about it, I mean, sure, we can invest, but the question
11:28is, right, our developers, our UI, UX engineers, they're going to spend time and effort improving
11:36products, right?
11:38As QA engineers, or even as developers, right, the question that we really have is, how do we
11:44test these audiovisual experiences?
11:47Because the biggest challenge when it comes, right, and this has been going on for a while
11:53now in the industry, is an audiovisual experience that's very subjective, right?
12:00There are no metrics.
12:02For example, if I have to run a functional test, if it passes, it passes, that is it.
12:07Or if I'm even running, let's say, non-functional tests, if I'm doing load testing, or I'm doing
12:13any kind of other performance testing, I have hardware metrics.
12:18I can see the memory usage.
12:20I can see the battery drain, the energy drain percentage.
12:24I can gather all of that data, but how do I figure out, how do I measure the audiovisual
12:32experience?
12:33Because for the most part of it, there are metrics, of course, for sure, right?
12:38Downsampling index, there are tons and tons of KPIs to actually figure it out.
12:43Resolution of the video, right?
12:45Or simply the speed the animation goes, right?
12:48There are multiple such things.
12:50But the one that we want to talk about today is perception, right?
12:55Some animations may be slow, some animations may be fast.
12:59But what's important is how your users perceive it.
13:03And thankfully, in the last decade with the developments in the research and the development
13:11and everything, we've been able to figure this out.
13:13How do we test it?
13:15And again, simple answer is using AI.
13:19But again, yeah, I'm not going to leave it at that.
13:23It'll be madness leaving it to simply giving your source code or just your video or your
13:29Lottie files to AI and letting it figure it out.
13:33How did it do?
13:34I mean, I hope we have something like that in the future.
13:37If we have it connected with an LLM, there's a chat sheet.
13:40You can upload stuff and then it'll tell you.
13:43But we do have a preliminary version of that.
13:47And again, when I say preliminary, I don't mean to the LLM ones, but it's a pretty robust
13:53one.
13:53And we've been doing it for a while.
13:56And thanks to obviously the pioneers in the industry, right?
13:59Again, calling out Netflix and Warner Bros, in fact, even Prime and all, right?
14:05So huge, huge fans of these.
14:07They were the ones that actually figured it out.
14:09Of course, there are other players in the market who are improving it.
14:13But if I have to bring some method to the madness, there are two major approaches that
14:22we can take.
14:24Number one, being a reference-based approach.
14:27And the other one being a reference-free approach.
14:29And as the name suggests, it's very simple.
14:32For the reference-based approach, we've got VMAF, which stands for Video Multimethod Assessment
14:39Fusion.
14:39I'll talk about it in a while.
14:41But it simply means that it needs a reference video.
14:47And on the other hand, we have reference-free.
14:49So right now, there are multiple ways you can do it without a reference.
14:54But the best one we feel, and maybe I'm a little biased being from Headspin, but I feel
15:00Headspin VMOS, Video Mean Opinion Score, it's doing pretty well in the market.
15:06So those are the two approaches we take.
15:09But now the question is, how do these approaches really work?
15:12Or what are these?
15:15So moving on, I'm talking about VMAF.
15:21So VMAF, or Video Multimethod Assessment Fusion, it's a metric at the end of the day, which is used
15:26to evaluate the quality of compressed video streams.
15:29And it ranges from 1 to 100.
15:32So end of the day, these are all KPIs.
15:34So these have scores.
15:36For VMAF, it's 1 to 100.
15:39Now, thanks to Netflix, again, huge fan, huge fan of Netflix.
15:44Huge fan of the content and huge fan of the research and development that they have provided
15:49to the industry.
15:50And this is all open source, by the way.
15:52So after the session, if anyone wants to go and load this up, the source code is available
15:59on GitHub.
16:00So Netflix, yes, they developed it.
16:03And they did it to address the limitations of existing video quality metrics, such as
16:08PSNR, which stands for Peak Signal-to-Noise Ratio, and SSIM, the Structural Similarity Index.
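As a concrete illustration (not part of the talk's tooling), the two classic full-reference metrics mentioned here, PSNR and SSIM, can be sketched in a few lines of NumPy. Note the SSIM below is a simplified single-window version; production implementations (e.g. scikit-image) average over local windows.

```python
import numpy as np

def psnr(ref, dist, max_val=255.0):
    """Peak Signal-to-Noise Ratio between a reference and a distorted frame."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, dist, max_val=255.0):
    """Simplified global SSIM (real SSIM averages over local windows)."""
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mu_x, mu_y = ref.mean(), dist.mean()
    var_x, var_y = ref.var(), dist.var()
    cov = ((ref - mu_x) * (dist - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    )

# Toy frames: a gradient and a noisy copy of it
rng = np.random.default_rng(0)
ref = np.tile(np.arange(256, dtype=np.float64), (64, 1))
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255)
print(round(psnr(ref, noisy), 1))       # higher = closer to the reference
print(round(ssim_global(ref, ref), 3))  # identical frames give 1.0
```

Both metrics compare pixel statistics only, which is exactly the limitation the speaker describes next: they don't model what a human viewer actually perceives.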
16:15Now, again, yes, those KPIs, they have their own limitations.
16:19But then these KPIs are still in the use.
16:23And I'll get to that.
16:25Why do we even have these, how do I put it, subset KPIs when we have a superset in the
16:31form of VMAF, right?
16:33One VMAF score, anything between 1 to 100 is good enough to tell you what the real picture
16:38is, where you stand in terms of the audiovisual experience that you're promising to your users.
16:44But we do have many such smaller KPIs.
16:47And the reason behind it is simple, right?
16:50Whenever we have, let's say, I screwed up my VMAF, I get a score of 20.
16:57How do I figure out where I went wrong?
16:58So that is when these subset KPIs that I like to call, they work.
17:02Now, coming back to VMAF.
17:05So how does VMAF really work?
17:07So we've got the pixel neighborhood.
17:09Basically, this is the part where we talk about the target video.
17:13And then obviously, we have AI models that we have been training.
17:17Now, VMAF utilizes several existing images, quality metrics, and features to derive the
17:23final score.
17:24But the main component of it is, one, the visual information fidelity, VIF over here
17:30under spatial feature extraction.
17:32That is used to assess the information fidelity loss across multiple spatial scales.
17:38So we divide, we typically, let's say, we would take a frame, we divide it into four
17:43or maybe even eight blocks.
17:45And these blocks are what actually make up the pixel neighborhood.
17:49And then we'd individually try to figure out.
17:52Then there's detail loss metric, which evaluates the loss of details and identifies
17:57impairments that could distract viewers.
18:00So ideally, we're talking about what the user would focus on versus what the user, what
18:06would be a detractor for the user.
18:09Now, once we have figured it out for the target image, or not image, actually the target media
18:15content, our video, to keep it very simple.
18:22Then we compare it against the model that we have trained, where we have more data,
18:28where we have the reference video, a full reference video to determine how does it
18:34fare against it.
18:35And that is how VMAF works.
18:37And the thing that I mentioned, how does it fare against it, that essentially is your
18:42VMAF score.
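The pipeline just described (per-frame features such as VIF at several spatial scales and the detail loss metric, fused into one 0-100 score) can be sketched as follows. This is an illustrative stand-in only: real VMAF fuses its features with an SVM regressor trained on subjective scores, and the weights and feature names below are hypothetical.

```python
import numpy as np

# Hypothetical per-frame features, each normalized to [0, 1]:
# VIF at four spatial scales, detail-loss metric (DLM), and inter-frame motion.
# These weights are illustrative stand-ins, not VMAF's trained model.
FUSION_WEIGHTS = {
    "vif_scale0": 10.0, "vif_scale1": 15.0, "vif_scale2": 20.0,
    "vif_scale3": 25.0, "dlm": 25.0, "motion": 5.0,
}

def vmaf_like_score(per_frame_features):
    """Fuse per-frame features into a single 0-100 score (illustration only)."""
    frame_scores = []
    for feats in per_frame_features:
        s = sum(FUSION_WEIGHTS[name] * value for name, value in feats.items())
        frame_scores.append(np.clip(s, 0.0, 100.0))
    # VMAF reports an aggregate (typically the mean) of per-frame scores
    return float(np.mean(frame_scores))

frames = [
    {"vif_scale0": 0.9, "vif_scale1": 0.9, "vif_scale2": 0.95,
     "vif_scale3": 0.95, "dlm": 0.9, "motion": 0.5},
    {"vif_scale0": 0.4, "vif_scale1": 0.5, "vif_scale2": 0.5,
     "vif_scale3": 0.6, "dlm": 0.5, "motion": 0.9},
]
print(round(vmaf_like_score(frames), 1))
```

In practice you would not implement this yourself: the open-source VMAF repository ships the trained models, and tools such as FFmpeg can report the score directly.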
18:45Now, the thing about VMAF is, well, first of all, kudos to Netflix for doing such a
18:50great job.
18:51It's an amazing.
18:53Okay, I'll just take one second.
18:56So I see last slide went too fast.
18:58What was the difference between those two types?
19:02I'll quickly backtrack and then I'll get back over here.
19:07So the difference is simple.
19:09We have got two methods, VMAF and VMOS.
19:13I haven't talked about VMOS yet.
19:14I'll do it in a minute.
19:15But VMAF is the one that uses a reference.
19:22So ideally, let's say I have a movie.
19:25Now, how do I say it's good or bad?
19:28Now, there are two ways to do it.
19:31I have another movie and I compare it against it, right?
19:34So based on looking at both the movies, I can figure out which one is better, which
19:39one is worse.
19:40And that is ideally what reference-based means.
19:44The second one is VMOS, where I take the movie, I break it down into a thousand little components,
19:51right?
19:52A thousand little KPIs.
19:54Ideally, that was the word that we would use in our software development lifecycle.
19:59So when we break it down into those KPIs, then we start measuring how that particular
20:06KPI, for example, let's say, keep it very simple, resolution, right?
20:10A 1080p video is going to provide better experience than a 720p video.
20:17But we'll mix it up a bit.
20:20How about a 1080p video in an environment where the internet, your download speed is
20:29just, let's say, 700 Kbps, not even a full Mbps, right?
20:34So in that environment, what would you prefer?
20:37And that is how we determine, without any reference videos, how a video is going to
20:43perform, what is going to be the audiovisual experience of it.
20:47So again, basic difference to measure one, we do have a reference video, but the other,
20:53we don't have a reference video.
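The 1080p-on-a-700-Kbps-link reasoning above is the essence of reference-free checking: you judge the delivered stream against the conditions, not against a pristine copy. A minimal sketch, where the bitrate figures are rough rules of thumb and not from the talk:

```python
# Rough rule-of-thumb sustained bitrates (kbps) per rung of a streaming
# ladder; illustrative figures only, ordered highest resolution first.
TYPICAL_BITRATE_KBPS = {
    "1080p": 5000,
    "720p": 2500,
    "480p": 1000,
    "360p": 700,
    "240p": 400,
}

def best_resolution(download_kbps, headroom=0.8):
    """Pick the highest resolution the measured bandwidth can sustain.

    A 1080p stream on a 700 kbps link would stall constantly, so a
    reference-free check flags the mismatch from conditions alone,
    without comparing against a reference copy of the video.
    """
    usable = download_kbps * headroom  # keep headroom for jitter/overhead
    for res, needed in TYPICAL_BITRATE_KBPS.items():  # highest first
        if needed <= usable:
            return res
    return "240p"  # floor: lowest rung of the ladder

print(best_resolution(700))    # the 700 kbps case from the talk
print(best_resolution(6500))
```

A real reference-free system combines many such condition-aware checks (network, device, player state) rather than this single rule.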
20:55Now, maybe this one more question that I want to answer, and I think I should have started
21:01with that, is why even use AI over here?
21:05So for us, it's very simple.
21:07I can look at a video and I can let you know my experience.
21:11And that is ideally what manual testing is.
21:15But of course, going back to our first question, audiovisual experience, right?
21:20It's quite subjective.
21:23So this is where manual testing can sort of take a toll, because for me, what is really
21:28good, what I consider good, the other users might not, and especially my paying users
21:33might not.
21:34So this is where AI comes in.
21:36Now, why AI?
21:38And these are, by the way, a mix of AI and ML models.
21:42So why AI and not a simple function?
21:45And again, the problem, it's subjective.
21:49But also in the recent years, say, I think it's been around a decade, CV algorithms,
21:59right?
21:59We've seen major developments in that segment.
22:02So we are able to actually, so we have got our CV algorithms, we've got our cloud.
22:07So we have abundant resources when we actually want to analyze something using these algorithms,
22:13right?
22:13We have got the best of the best processes.
22:17So this is just a more efficient way.
22:19And the second part of it is, one being efficiency, and the second one, what if I want to do,
22:24let's say, if I want to run 1,000 sessions in the next 10 hours?
22:29I'm just not going to run it one time on one device and figure it out.
22:32So in those cases, we need automation.
22:35And when we need automation, we need frameworks like VMAF or VMOS to actually
22:42execute one single test multiple times, right?
22:44This is the same thing that we do with all our testing, all our Appium or XCUI testing,
22:50but just a little different targets, different KPIs, and obviously different frameworks.
22:58So I think I'm good to move on from VMAF, or should I just do a quick recap of it?
23:07With VMAF, it's very simple: we've got a couple of steps, we train a model first.
23:11I'll just do a small recap.
23:13We train one model first, then we break down our existing video into a couple of parts,
23:21and then we compare it.
23:22And this, ideally what this is happening, obviously, this is an oversimplification,
23:26but I want you to understand this is where the major limitations of the VMAF framework start.
23:33We have got multiple limitations in terms of codecs, in terms of the video qualities,
23:38right?
23:38Be it HDR, be it without HDR, but I won't go into that.
23:42The biggest limitation that I see in VMAF is that it works best with videos of a few seconds.
23:49So if we have any long form content, that is where we start lacking, right?
23:55Because there are things like recency and primacy, or even rebuffering events, right?
24:01Because it's a few seconds, we don't give it a lot of time to process.
24:05And obviously, because somewhere VMAF has to have those capabilities limited.
24:11It's still in development.
24:13Netflix is still working on it.
24:15And it's not Netflix alone.
24:16There are a lot of other universities that are working on it.
24:19So let's hope something good happens over here.
24:24But what is the alternative of it?
24:27I have got long form videos.
24:29For example, the earlier example that we discussed of Duolingo.
24:35If I'm sitting on a learning screen, I'm clicking multiple buttons,
24:39and I'm going to multiple stages, which is multiple different animations.
24:43How do I test everything?
24:46And that is where Headspin's video quality MOS comes in.
24:50MOS stands for Mean Opinion Score.
24:54Now, what we realized is, right, when we actually were creating this platform,
24:59when we were working on the media and entertainment segment of it,
25:03and we decided to come up with video quality MOS,
25:07is that various real world factors like device and OS specifications,
25:12network bandwidth, latency, jitter, packet losses, CDN misconfiguration,
25:18and even sometimes the mobile application client issues, the application itself.
25:24All of this influenced the delivery of video content through mobile applications.
25:29Now, to solve for this, we came up with a comprehensive suite of
25:34reference-free quality metrics to analyze and check for things like
25:39blockiness, blurriness, brightness, colorfulness,
25:43exposure, flickering, freezing, frame rate drops, and buffering time.
25:50Multiple, multiple other things, right?
25:52Multiple other small factors that are easy to measure.
25:56When we talk about the whole video, it becomes difficult because,
26:02again, experiences are subjective.
26:03But for example, if I take a very tiny metric, right?
26:07For example, how many frame drops do I have, right?
26:12How many frame rate drops I've had in a, let's say, a five-minute video.
26:16That alone can help me determine some part of the experience.
26:20And when you mix it with multiple other such factors, that is when we have VMOS.
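The frame-rate-drop KPI just described can be estimated from raw frame timestamps alone, with no reference video. This is a sketch of the idea, not HeadSpin's actual implementation:

```python
def count_frame_drops(timestamps_ms, nominal_fps=30, tolerance=1.5):
    """Count dropped frames from capture timestamps.

    A gap much larger than the nominal frame interval means frames
    were skipped; tolerance controls how large "much larger" is.
    """
    interval = 1000.0 / nominal_fps
    drops = 0
    for prev, cur in zip(timestamps_ms, timestamps_ms[1:]):
        gap = cur - prev
        if gap > interval * tolerance:
            # e.g. a 100 ms gap at 30 fps (33.3 ms/frame) means ~2 missing frames
            drops += round(gap / interval) - 1
    return drops

# 30 fps stream with one stall: frames at 0, 33, 66 ms, then a jump to 200 ms
print(count_frame_drops([0, 33, 66, 200, 233], nominal_fps=30))
```

Each such small, objective KPI contributes one signal; the VMOS score is the combination of many of them, as the talk describes.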
26:26Now, today at Headspin, we are doing both the things, right?
26:32VMAF has its benefits.
26:35VMOS has its benefits.
26:36Of course, we created VMOS.
26:38And so we thought, you know, let's not miss, you know, let's not ignore one.
26:45Let's use the advantages of both.
26:48And we can see.
26:49So ideally, what happens is whenever you are executing a test on the Headspin platform,
26:54we first check for the basic KPIs, right?
26:56Blurriness, blockiness, brightness.
26:59Then we check for frame rate drops, loading, buffering time.
27:02And then we have our VMAF models on top of it.
27:05And we have our VMOS models on top of it, which analyzes both of the scores.
27:11And all of these scores, all of these KPIs that we are gathering,
27:16they can be seen in something, we call this the waterfall UI.
27:21So I just want you to pay quick attention to this particular part,
27:26where I can see the VMOS score for every second of this video being played.
27:32I can see the screen change.
27:33I can see the colorfulness.
27:34I can see the contrast.
27:35This is a screenshot.
27:37Otherwise, I would have scrolled and there would have been so many other KPIs.
27:41So we are gathering a second by second analysis of a particular video
27:48across so many other metrics.
27:50And together, these metrics, they make up the video quality MOS.
27:54That's the MOS score.
27:57And that is, these are the things that we are actually analyzing.
28:02Audiovisual experiences today.
28:05So yeah, that is all I wanted to talk about today.
28:11But yeah, if any of you have any questions,
28:13please feel free to drop it in the chat or the Q&A section.
28:17Would be happy to answer.
28:19So we have a question, how do we choose between VMAF and VMOS?
28:29Okay, that's a good question.
28:30More context, our client's application has a lot of animations and audio feedback.
28:36So it depends.
28:38It depends on the user, but it's a good question.
28:49First of all, the way I think of it is,
28:52because I do this for a lot of our clients as well,
28:55is how easy it is to implement and what is going to be the impact of it.
29:00So for example, if I talk about, I'll just quickly scroll back.
29:05If I talk about VMOS specifically, there hasn't been a lot of,
29:11there are a lot of tools in the market,
29:12but when it comes to the most developed one and the most accurate,
29:15I like to believe it's Headspin.
29:17Whereas VMAF, it's a completely open source framework.
29:21Now, the thing with VMOS is, we have it on a cloud platform already.
29:26So implementing VMOS on your media applications
29:29or on any application like that, it's very easy.
29:32But on the other hand, with VMAF, while it may take some effort,
29:39again, the repository is available on GitHub.
29:42I would like to share the link with you somewhere,
29:45and the articles as well, maybe in a follow-up email for everyone later.
29:51But yeah, ideally, again, depending upon what kind of videos,
29:56what kind of, let's say, how long your content goes on for,
30:02we can actually decide whether it is to be VMAF or VMOS.
30:05But we have done multiple such implementations
30:08and what we have seen is having a mix of both is equally important.
30:14Because again, one thing about VMOS is we capture the entire session.
30:18That is why we have these annotations.
30:20They are all automated, but for VMAF, it cannot go on for long.
30:25But then again, if you have something very sharp, why not go for it?
30:30I hope that answered your question.
30:32Let me know if it didn't.
30:34Second question, do I have something?
30:37How do you measure colorfulness?
30:42So again, we do have OpenCV.
30:46We have CV-based, not OpenCV, of course.
30:48We did our own development.
30:49So we have CV-based algorithms to do that.
30:52And obviously, it has been trained on different colors,
30:56and that is how we figure out whether something can be considered colorful or not.
31:02Again, individually, I'd not focus too much on that particular KPI.
31:06But yes, together with contrast, with brightness, with blockiness and everything,
31:10it helps determine how the experience actually is.
31:17Which KPIs should we use to measure AV experience?
31:22So again, VMAF and VMOS, these are both frameworks, of course,
31:27but also these are the KPIs.
31:29One thing I forgot to mention, while VMAF, it has a rating of 1 to 100.
31:34So you can get a score out of 100 to determine the experience.
31:38For VMOS, we have percentage basis that you can see over here on the screenshot.
31:45But at the end of the day, the rating is something out of 5.
31:51So these are both KPIs.
31:53But again, if I talk about the subset KPIs that I mentioned earlier,
31:58that our models actually use, so there can be tons of things.
32:04For example, distortion indicators, delay, brightness, contrast, PSNR,
32:09the one we talked about earlier that Netflix tried to get rid of,
32:12not get rid of, but improve upon.
32:15So we still measure that.
32:17Then there's blurriness, blockiness, frame skips and freezes.
32:21Those are the ones I find a little bit interesting.
32:23Again, great for gaming and entertainment kind of media.
32:28Then there's effective frame rate.
32:29And also, it's not limited to just video.
32:32Again, audio-visual experience, right?
32:34Audio is included too.
32:35So there's MOS-LQO, there's R-factor, min/average/max delay, jitter,
32:40attenuation, degradation, all sorts of degradation, right?
32:44Speech, pause, signal-to-noise ratio, all of those things included.
32:49And that is what makes up these mega scores.
32:53Let's put it that way.
32:57I hope that answers that question.
32:59So VMAF is for short videos.
33:03Not necessarily, no.
33:05Again, Netflix, they have long-form videos,
33:09but it depends upon the scope of your testing.
33:12So VMAF can only support, let's say, a few seconds at max, four to five seconds.
33:17So in those cases, yes, VMAF is helpful.
33:20But if you're planning to test the entire thing, if you're planning to,
33:24for example, another, maybe I'll share an example we did for a recent client of ours.
33:30It's a video conferencing tool, and they wanted to check the entire thing.
33:35So that is something not possible with VMAF.
33:38VMAF can determine how your media content or how the application with the media content is
33:45performing for those split few seconds.
33:47But for the entire experience, because let's say we are having the Zoom webinar,
33:52which is going on for, I think, a little over 30 minutes now.
33:55For the entire experience, how has it been?
33:58So for that to figure out, I think, VMOS is much more helpful.
34:02So if you already have a robust infra like Netflix, maybe VMAF is the best one.
34:08And they are doing it, so who am I to comment?
34:10But then again, for example, we work with a lot of telco clients.
34:15For them, VMOS has been a little more dominant.
34:17How much time and effort does it take to set up testing environments with these AI?
34:25So I think, OK, setting up these frameworks.
34:29Yes, that was one of the key factors while deciding which one to go for.
34:35Well, whichever you go for, if you're implementing it in-house on your own server, in that case,
34:44VMAF, obviously, is going to take more time.
34:48You have to do the development.
34:49You have to do the heavy lifting up front.
34:52But if you're going with something like VMOS through HeadSpin, or any other tool in the market,
34:58we've already done the development for you.
34:59We have already hosted it on the cloud.
35:01So it's almost no effort.
35:04Of course, you have to execute that test.
35:06You have to execute that session.
35:08So maybe writing that Appium code, where that video or those animations, they are being run.
35:15So that is the only effort that it will take.
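That "only effort" is just an ordinary automation session that plays the media. A minimal sketch follows, assuming the Appium Python client; the capability values, package name, and locator are placeholders, not a real app, and the driver function is shown but not executed here.

```python
# Sketch only: the Appium session that drives the video is a normal
# functional test. All capability values and locators below are placeholders.

def video_session_caps(platform="Android", device="placeholder-device",
                       app="com.example.videoapp"):
    """Desired-capabilities dict for a session that will play video."""
    return {
        "platformName": platform,
        "appium:deviceName": device,
        "appium:appPackage": app,
        "appium:appActivity": ".MainActivity",
        "appium:automationName": "UiAutomator2",
    }

def run_video_journey(driver, play_button_id="play", watch_seconds=30):
    """Drive the app long enough for the AV analysis to capture the stream."""
    import time
    driver.find_element("id", play_button_id).click()
    time.sleep(watch_seconds)  # let the media play; analysis happens off-device

print(sorted(video_session_caps()))
```

The quality scoring itself happens outside this script, on the captured session, which is why the scripting effort stops here.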
35:20I think this is the last question.
35:22Thanks for showing the architecture, et cetera.
35:24But I'm still not sure how exactly is the testing done.
35:27Maybe a sample demo using Headspin would be very helpful next time to visualize.
35:33100%, I agree.
35:36I should have maybe done a demo.
35:39Unfortunately, I don't have a session ready with me.
35:42So I cannot do a demo right now.
35:43But Jeet, how about this?
35:46Maybe we reach out to you.
35:48I'm sure we have your email ID when you registered.
35:51So I think I'll reach out to you and schedule that demo for you.
35:55But to answer your question very simply, how is the testing done?
36:00It's very simple, right?
36:02You run the test, right?
36:04Let's say it's a simple test session where multiple things are being executed,
36:09multiple user journeys are being executed.
36:11And majority of those user journeys include some form of media content,
36:15some form of video, some form of animation,
36:18or even maybe some form of engaging images, right?
36:22Anyway, now, after that session, what you'd like to figure out is
36:27what is the VMOS score of it?
36:30Or if it's purely a video, what is the VMAF score of it?
36:34And it's easy to say: if your VMAF score, let's say,
36:39is something over 50, maybe it's something like 60, 70,
36:43your test has passed, right?
36:44Because again, the experience that the user would perceive is on a higher level.
36:50Similarly, for VMOS, it's 1, 2, 3, 4, 5.
36:52So if it's a 5, it's amazing.
36:54If it's a 1, then we need to worry.
36:56Then we need to figure it out.
36:57Is it actually the animation content, right?
37:00That is where the RCA part of things would kick in,
37:03whether it's the content itself, or whether it's the network,
37:07whether it's the application, whether it's the device,
37:09we will figure that out.
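The pass-or-investigate logic just described can be written as a tiny gate. The exact cut-offs below (VMAF above 50, VMOS of at least 3) are taken loosely from the rough numbers mentioned in the talk and are illustrative, not an official recommendation.

```python
# Toy gate based on the rough thresholds mentioned above: a low score
# doesn't fail the test outright, it triggers the RCA step.

def verdict(vmaf=None, vmos=None):
    """Turn raw quality scores into a pass / investigate decision."""
    if vmaf is not None and vmaf <= 50:
        return "investigate: low VMAF, start RCA (content/network/app/device)"
    if vmos is not None and vmos < 3:
        return "investigate: low VMOS, start RCA (content/network/app/device)"
    return "pass"

print(verdict(vmaf=65))   # pass
print(verdict(vmos=1.8))  # investigate
```

In a pipeline, the "investigate" branch is where the root-cause analysis against content, network, application, and device would kick in.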
37:11And of course, again, when I do the demo, right,
37:15if we end up doing one.
37:16Also, if there's anyone else who would like to see a demo,
37:19please let us know.
37:20Maybe we can do another webinar if we have more,
37:23or we can reach out to you individually.
37:25So we will show you, right, how do we do the RCA.
37:29You can see these issue cards, right?
37:31They also help you figure it out, right, what to do.
37:33So yeah, that is it.
37:41Any more questions?
37:49Oh, okay, a chat from Mohammed.
37:52Yep, sure.
37:57We will reach out to you for the demo as well.
38:02Awesome.
38:03If there's no more questions,
38:07I'll definitely reach out to you guys for the demo.
38:09But yeah, if there are no more questions,
38:11I think we can stop the session right now.
38:13And I'll give you back a few minutes.
38:15It's already nighttime over here.
38:17So thank you so much for attending the session.
38:21This is a topic that I've been working a lot on, right?
38:23We internally are working on the product.
38:26So would love to talk about it in more detail,
38:29maybe have some advanced sessions
38:31where we actually run these tests,
38:33where we actually run an Appium code
38:35and show you within the webinar itself,
38:38what it looks like, right?
38:40It'll be amazing to do.
38:42Maybe the part two of this particular session,
38:45I think we'll focus more on the demo.
38:47But yeah, for now, for the individual request,
38:49I'll definitely reach out to you guys.
38:51Thank you so much for joining us.
38:52Have a good night.
38:53Bye-bye.
38:59Bye-bye.
