SEGA debuts 14 minutes of English gameplay footage for Yakuza: Like a Dragon



The first day of IGN's Summer of Gaming continues with a developer interview and a broadcast of English gameplay for the newest entry in the Yakuza franchise. The video shows 14 minutes of footage focusing on the new turn-based combat system, a departure from the series' previous action-based entries. The game will also feature a job system, minigames, and a brand-new protagonist named Kasuga Ichiban. Yakuza: Like a Dragon is planned for release this year on Xbox One, Xbox Series X, Steam, the Microsoft Store, and PlayStation 4.
 

Ottoclav

Well-Known Member
Member
Joined
May 27, 2020
Messages
132
Trophies
0
Age
43
XP
290
Country
United States
I don't know that I have ever heard the "a dub is more immersive" argument before. It stands to some reason, possibly even more so in games where we don't get detailed facial expressions to go along with the voices, but I don't think I can get to the point of slating a work for not including one.

Easier to have something on in the background. I'm too lazy to read subs. Subs make me miss the action.

Sure, all things that might well detract from some approaches.

Give it a couple of years, to where voice synthesis is a matter of typing in the line and selecting an emotion (which might even be auto-suggested from context), and I imagine things will change as the cost of doing a dub drops through the floor. Even as it stands right now (or some three or so years ago), the sample size needed to create a workable synthesized voice is already low, and pretty modest to reach "your mother could not tell" quality.
Even faster would be to use voice actors for each language, monitored by software that maps their performance onto the face of the in-game character. The software controls the character's facial expressions by translating the facial expressions of the voice actor. This software already exists: if I were to use it as a voice actor, my facial expressions would be transferred onto the character in-game, and it would look like the character is actually the one speaking my words. Games would only need to be programmed to swap the facial animation and voices when a language is selected in the options. Sure, more money would be paid to voice actors, and the games would be larger because of the extra "language" content, but the "dubbing" would be flawless, and it would be awesome! I can't remember who developed the tech, but there was a big scare a few years back about it, because the program could be used to digitally impersonate a person: language, accent, and facial expressions all included. Pretty cool, huh?
 
Last edited by Ottoclav,

FAST6191

Techromancer
Editorial Team
Joined
Nov 21, 2005
Messages
36,798
Trophies
3
XP
28,348
Country
United Kingdom
Even faster would be to implement language voice actors that are monitored by software that is then loaded onto the face of the character in game. [...] Pretty cool, huh?

In much the same way that regular actors don't always make the best voice actors, voice actors tend not to be the most facially emotive (not to mention what else we would then have to rely on).

Generating it via synthesis also allows for hundreds of hours of dialogue options to be kicked to an NPC just because, and then probably some dynamic stuff as well.

That said, I would like to see a full facial-muscle simulation attempted. I don't know whether that would be an AI thing, or whether they would find the one person that knows facial anatomy, emotions/microexpressions, and coding, and have them spend a year or so making it.
 

TheSpiritofFF7

Member
Newcomer
Joined
Jun 8, 2020
Messages
8
Trophies
0
Age
27
XP
73
Country
Japan
That said, I would like to see a full facial-muscle simulation attempted. I don't know whether that would be an AI thing, or whether they would find the one person that knows facial anatomy, emotions/microexpressions, and coding, and have them spend a year or so making it.

It's done via motion capture most of the time. The amount of time and manpower you'd need to animate these manually would be ridiculous, not to mention wasteful.
 

FAST6191

Techromancer
Editorial Team
Joined
Nov 21, 2005
Messages
36,798
Trophies
3
XP
28,348
Country
United Kingdom
It's done via motion capture most of the time. The amount of time and manpower you'd need to animate these manually would be ridiculous, not to mention wasteful.
It might be now; however, if you cracked a simulation of it (rather than individually rigging each face), similar to how arm joints these days usually bend properly so we don't get old-school ragdoll physics, you would get a cast of thousands for not a lot of work, and that is something I would be interested in seeing the effects of.
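To make the "simulate it rather than rig it" idea concrete, here is a minimal Python sketch of driving facial blend-shape weights procedurally from a phoneme track. All the names here (visemes, blend-shape keys) are illustrative, not any real engine's API; the point is that one small system can animate any number of faces without per-character hand animation.

```python
# Hypothetical viseme -> blend-shape weight table (illustrative names only).
VISEME_WEIGHTS = {
    "AA":   {"jaw_open": 0.9, "lip_pucker": 0.0, "smile": 0.1},
    "OO":   {"jaw_open": 0.4, "lip_pucker": 0.9, "smile": 0.0},
    "EE":   {"jaw_open": 0.2, "lip_pucker": 0.0, "smile": 0.8},
    "rest": {"jaw_open": 0.0, "lip_pucker": 0.0, "smile": 0.0},
}

def blend(a, b, t):
    """Linearly interpolate two weight dicts; t in [0, 1]."""
    return {k: a[k] * (1 - t) + b[k] * t for k in a}

def frames_for(track, fps=4):
    """Expand a phoneme track into per-frame blend-shape weights,
    easing from each viseme toward the next (ending at rest)."""
    frames = []
    for cur, nxt in zip(track, track[1:] + ["rest"]):
        a, b = VISEME_WEIGHTS[cur], VISEME_WEIGHTS[nxt]
        for i in range(fps):
            frames.append(blend(a, b, i / fps))
    return frames

frames = frames_for(["AA", "OO", "EE"])  # 3 visemes x 4 frames = 12 frames
```

Swap the phoneme track for another language's and the same code produces matching mouth animation, which is the appeal over rigging every localized performance by hand.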
 
