Has anyone successfully done offline rendering with Core Audio?

I need to mix two audio files and apply reverb (using two AudioFilePlayer units, a MultiChannelMixer, Reverb2 and RemoteIO).
I have it working and can save the mix while it is being previewed (in the render callback of the RemoteIO unit).

I need to save it without playing it back (offline).
Thanks in advance.

Solution

Offline rendering using the GenericOutput AudioUnit worked for me.
I am sharing the working code here.
The Core Audio framework seems a little intimidating at first, but small things like the ASBD, parameters, etc. are what cause most of these problems. Keep trying and it will work. Don't give up :-). Core Audio is very powerful and useful for working with low-level audio. This is what I have learned over the last few weeks. Enjoy :-D ....

Declare these in the .h

//AUGraph
AUGraph mGraph;
//Audio Unit References
AudioUnit mFilePlayer;
AudioUnit mFilePlayer2;
AudioUnit mReverb;
AudioUnit mTone;
AudioUnit mmixer;
AudioUnit mGIO;
//Audio File Location
AudioFileID inputFile;
AudioFileID inputFile2;
//Audio file references for saving
ExtAudioFileRef extAudioFile;
//Standard sample rate
Float64 graphSampleRate;
AudioStreamBasicDescription stereoStreamFormat864;

Float64 MaxSampleTime;

// in .m class

- (id) init
{
    self = [super init];
    graphSampleRate = 44100.0;
    MaxSampleTime   = 0.0;
    UInt32 category = kAudioSessionCategory_MediaPlayback;
    Checkerror(AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,sizeof(category),&category),"Couldn't set category on audio session");
    [self initializeAUGraph];
    return self;
}
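
All of the snippets here call a Checkerror helper that is not shown in the post. A minimal sketch of it (my assumption, modeled on the usual Core Audio error-check idiom; it assumes <AudioToolbox/AudioToolbox.h> and <ctype.h> are imported) could look like this:

// Assumed helper (not part of the original post): log the failing operation and the
// four-char code or integer value of the OSStatus, then bail out.
static void Checkerror(OSStatus error, const char *operation)
{
    if (error == noErr) return;
    char errorString[20];
    // See if the error looks like a four-character code.
    *(UInt32 *)(errorString + 1) = CFSwapInt32HostToBig(error);
    if (isprint(errorString[1]) && isprint(errorString[2]) &&
        isprint(errorString[3]) && isprint(errorString[4])) {
        errorString[0] = errorString[5] = '\'';
        errorString[6] = '\0';
    } else {
        // Not printable; format it as a plain integer instead.
        sprintf(errorString, "%d", (int)error);
    }
    fprintf(stderr, "Error: %s (%s)\n", operation, errorString);
    exit(1);
}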

// ASBD setup

- (void) setupStereoStream864 {    
    // The AudioUnitSampleType data type is the recommended type for sample data in audio
    // units. This obtains the byte size of the type for use in filling in the ASBD.
    size_t bytesPerSample = sizeof (AudioUnitSampleType);
    // Fill the application audio format struct's fields to define a linear PCM,
    // stereo, noninterleaved stream at the hardware sample rate.
    stereoStreamFormat864.mFormatID          = kAudioFormatLinearPCM;
    stereoStreamFormat864.mFormatFlags       = kAudioFormatFlagsAudioUnitCanonical;
    stereoStreamFormat864.mBytesPerPacket    = bytesPerSample;
    stereoStreamFormat864.mFramesPerPacket   = 1;
    stereoStreamFormat864.mBytesPerFrame     = bytesPerSample;
    stereoStreamFormat864.mChannelsPerFrame  = 2; // 2 indicates stereo
    stereoStreamFormat864.mBitsPerChannel    = 8 * bytesPerSample;
    stereoStreamFormat864.mSampleRate        = graphSampleRate;
}

// AUGraph setup

- (void)initializeAUGraph
{
    [self setupStereoStream864];

    // Set up the AUGraph, add AUNodes, and make connections
// create a new AUGraph
Checkerror(NewAUGraph(&mGraph),"Couldn't create new graph");

// AUNodes represent AudioUnits on the AUGraph and provide an
// easy means for connecting audioUnits together.
    AUNode filePlayerNode;
    AUNode filePlayerNode2;
AUNode mixerNode;
AUNode reverbNode;
AUNode toneNode;
AUNode gOutputNode;

// file player component
    AudioComponentDescription filePlayer_desc;
filePlayer_desc.componentType = kAudioUnitType_Generator;
filePlayer_desc.componentSubType = kAudioUnitSubType_AudioFilePlayer;
filePlayer_desc.componentFlags = 0;
filePlayer_desc.componentFlagsMask = 0;
filePlayer_desc.componentManufacturer = kAudioUnitManufacturer_Apple;

// file player component2
    AudioComponentDescription filePlayer2_desc;
filePlayer2_desc.componentType = kAudioUnitType_Generator;
filePlayer2_desc.componentSubType = kAudioUnitSubType_AudioFilePlayer;
filePlayer2_desc.componentFlags = 0;
filePlayer2_desc.componentFlagsMask = 0;
filePlayer2_desc.componentManufacturer = kAudioUnitManufacturer_Apple;

// Create AudioComponentDescriptions for the AUs we want in the graph
// mixer component
AudioComponentDescription mixer_desc;
mixer_desc.componentType = kAudioUnitType_Mixer;
mixer_desc.componentSubType = kAudioUnitSubType_MultiChannelMixer;
mixer_desc.componentFlags = 0;
mixer_desc.componentFlagsMask = 0;
mixer_desc.componentManufacturer = kAudioUnitManufacturer_Apple;

// Create AudioComponentDescriptions for the AUs we want in the graph
// Reverb component
AudioComponentDescription reverb_desc;
reverb_desc.componentType = kAudioUnitType_Effect;
reverb_desc.componentSubType = kAudioUnitSubType_Reverb2;
reverb_desc.componentFlags = 0;
reverb_desc.componentFlagsMask = 0;
reverb_desc.componentManufacturer = kAudioUnitManufacturer_Apple;


//tone component
    AudioComponentDescription tone_desc;
tone_desc.componentType = kAudioUnitType_FormatConverter;
//tone_desc.componentSubType = kAudioUnitSubType_NewTimePitch;
    tone_desc.componentSubType = kAudioUnitSubType_Varispeed;
tone_desc.componentFlags = 0;
tone_desc.componentFlagsMask = 0;
tone_desc.componentManufacturer = kAudioUnitManufacturer_Apple;


    AudioComponentDescription gOutput_desc;
gOutput_desc.componentType = kAudioUnitType_Output;
gOutput_desc.componentSubType = kAudioUnitSubType_GenericOutput;
gOutput_desc.componentFlags = 0;
gOutput_desc.componentFlagsMask = 0;
gOutput_desc.componentManufacturer = kAudioUnitManufacturer_Apple;

//Add nodes to graph

// Add nodes to the graph to hold our AudioUnits.
// You pass in a reference to the AudioComponentDescription
// and get back an AudioUnit.
    AUGraphAddNode(mGraph,&filePlayer_desc,&filePlayerNode );
    AUGraphAddNode(mGraph,&filePlayer2_desc,&filePlayerNode2 );
    AUGraphAddNode(mGraph,&mixer_desc,&mixerNode );
    AUGraphAddNode(mGraph,&reverb_desc,&reverbNode );
    AUGraphAddNode(mGraph,&tone_desc,&toneNode );
AUGraphAddNode(mGraph,&gOutput_desc,&gOutputNode);


//Open the graph early, initialize late
// opening the graph opens the AudioUnits but does not initialize them (no resource allocation occurs here)

Checkerror(AUGraphOpen(mGraph),"Couldn't open the graph");

//Reference to Nodes
// get the reference to the AudioUnit object for the file player graph node
AUGraphNodeInfo(mGraph,filePlayerNode,NULL,&mFilePlayer);
AUGraphNodeInfo(mGraph,filePlayerNode2,NULL,&mFilePlayer2);
    AUGraphNodeInfo(mGraph,reverbNode,NULL,&mReverb);
    AUGraphNodeInfo(mGraph,toneNode,NULL,&mTone);
    AUGraphNodeInfo(mGraph,mixerNode,NULL,&mmixer);
AUGraphNodeInfo(mGraph,gOutputNode,NULL,&mGIO);

    // Wire the graph: file players -> mixer (input buses 0 and 1) -> reverb -> generic output.
    // (Reconstructed wiring based on the chain described above; the tone/Varispeed node is left unconnected.)
    AUGraphConnectNodeInput(mGraph,filePlayerNode,0,mixerNode,0);
    AUGraphConnectNodeInput(mGraph,filePlayerNode2,0,mixerNode,1);
    AUGraphConnectNodeInput(mGraph,mixerNode,0,reverbNode,0);
    AUGraphConnectNodeInput(mGraph,reverbNode,0,gOutputNode,0);


    UInt32 busCount   = 2;    // bus count for mixer unit input

//Setup mixer unit bus count
    Checkerror(AudioUnitSetProperty (
                                 mmixer,kAudioUnitProperty_ElementCount,kAudioUnitScope_Input,0,&busCount,sizeof (busCount)
                                 ),"Couldn't set mixer unit's bus count");

//Enable metering mode to view the input and output levels of the mixer
    UInt32 onValue = 1;
    Checkerror(AudioUnitSetProperty(mmixer,kAudioUnitProperty_MeteringMode,kAudioUnitScope_Input,0,&onValue,sizeof(onValue)),"Couldn't enable metering on the mixer");

// Increasing the maximum frames per slice allows the mixer unit to accommodate the
//    larger slice size used when the screen is locked.
    UInt32 maximumFramesPerSlice = 4096;

    Checkerror(AudioUnitSetProperty (
                                 mmixer,kAudioUnitProperty_MaximumFramesPerSlice,kAudioUnitScope_Global,0,&maximumFramesPerSlice,sizeof (maximumFramesPerSlice)
                                 ),"Couldn't set mixer unit's maximum frames per slice");

// set the audio data format of the tone unit
    AudioUnitSetProperty(mTone,kAudioUnitProperty_StreamFormat,kAudioUnitScope_Output,0,&stereoStreamFormat864,sizeof(AudioStreamBasicDescription));
// set the audio data format of the reverb unit
    AudioUnitSetProperty(mReverb,kAudioUnitProperty_StreamFormat,kAudioUnitScope_Output,0,&stereoStreamFormat864,sizeof(AudioStreamBasicDescription));

// set the initial reverb decay times (Reverb2 parameters 4 and 5)
    AudioUnitParameterValue reverbTime = 2.5;
    AudioUnitSetParameter(mReverb,kReverb2Param_DecayTimeAt0Hz,kAudioUnitScope_Global,0,reverbTime,0);
    AudioUnitSetParameter(mReverb,kReverb2Param_DecayTimeAtNyquist,kAudioUnitScope_Global,0,reverbTime,0);

    AudioStreamBasicDescription     auEffectStreamFormat;
    UInt32 asbdSize = sizeof (auEffectStreamFormat);
memset (&auEffectStreamFormat,0,sizeof (auEffectStreamFormat ));

// get the audio data format from the reverb unit's input
Checkerror(AudioUnitGetProperty(mReverb,kAudioUnitProperty_StreamFormat,kAudioUnitScope_Input,0,&auEffectStreamFormat,&asbdSize),"Couldn't get aueffectunit ASBD");


    auEffectStreamFormat.mSampleRate = graphSampleRate;

// set the audio data format of the mixer unit's output
    Checkerror(AudioUnitSetProperty(mmixer,kAudioUnitProperty_StreamFormat,kAudioUnitScope_Output,0,&auEffectStreamFormat,sizeof(auEffectStreamFormat)),"Couldn't set ASBD on mixer output");

Checkerror(AUGraphInitialize(mGraph),"Couldn't Initialize the graph");

    [self setUpAUFilePlayer];
    [self setUpAUFilePlayer2];  
}
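
As a debugging aid (my own suggestion, not part of the original post), AudioToolbox's CAShow() can dump the finished graph to the console once AUGraphInitialize has succeeded, which makes it easy to verify the player -> mixer -> reverb -> generic output chain before pulling any audio:

// Optional: print the AUGraph's nodes and connections to the console for verification.
CAShow(mGraph);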

// Audio file player setup: here I set up the voice file

-(OSStatus) setUpAUFilePlayer{
NSString *songPath = [[NSBundle mainBundle] pathForResource: @"testVoice" ofType:@".m4a"];
CFURLRef songURL = ( CFURLRef) [NSURL fileURLWithPath:songPath];

// open the input audio file
Checkerror(AudioFileOpenURL(songURL,kAudioFileReadPermission,0,&inputFile),"setUpAUFilePlayer AudioFileOpenURL Failed");

AudioStreamBasicDescription fileASBD;
// get the audio data format from the file
UInt32 propSize = sizeof(fileASBD);
Checkerror(AudioFileGetProperty(inputFile,kAudioFilePropertyDataFormat,&propSize,&fileASBD),"setUpAUFilePlayer Couldn't get file's data format");

// tell the file player unit to load the file we want to play
Checkerror(AudioUnitSetProperty(mFilePlayer,kAudioUnitProperty_ScheduledFileIDs,kAudioUnitScope_Global,0,&inputFile,sizeof(inputFile)),"setUpAUFilePlayer AudioUnitSetProperty[kAudioUnitProperty_ScheduledFileIDs] Failed");

UInt64 nPackets;
UInt32 propsize = sizeof(nPackets);
Checkerror(AudioFileGetProperty(inputFile,kAudioFilePropertyAudioDataPacketCount,&propsize,&nPackets),"setUpAUFilePlayer AudioFileGetProperty[kAudioFilePropertyAudioDataPacketCount] Failed");

// tell the file player AU to play the entire file
ScheduledAudioFileRegion rgn;
memset (&rgn.mTimeStamp,0,sizeof(rgn.mTimeStamp));
rgn.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
rgn.mTimeStamp.mSampleTime = 0;
rgn.mCompletionProc = NULL;
rgn.mCompletionProcUserData = NULL;
rgn.mAudioFile = inputFile;
rgn.mLoopCount = -1;
rgn.mStartFrame = 0;
rgn.mFramesToPlay = nPackets * fileASBD.mFramesPerPacket;

if (MaxSampleTime < rgn.mFramesToPlay)
{
    MaxSampleTime = rgn.mFramesToPlay;
}

Checkerror(AudioUnitSetProperty(mFilePlayer,kAudioUnitProperty_ScheduledFileRegion,kAudioUnitScope_Global,0,&rgn,sizeof(rgn)),"setUpAUFilePlayer AudioUnitSetProperty[kAudioUnitProperty_ScheduledFileRegion] Failed");

// prime the file player AU with default values
UInt32 defaultVal = 0;

Checkerror(AudioUnitSetProperty(mFilePlayer,kAudioUnitProperty_ScheduledFilePrime,kAudioUnitScope_Global,0,&defaultVal,sizeof(defaultVal)),"setUpAUFilePlayer AudioUnitSetProperty[kAudioUnitProperty_ScheduledFilePrime] Failed");


// tell the file player AU when to start playing (-1 sample time means next render cycle)
AudioTimeStamp startTime;
memset (&startTime,0,sizeof(startTime));
startTime.mFlags = kAudioTimeStampSampleTimeValid;

startTime.mSampleTime = -1;
Checkerror(AudioUnitSetProperty(mFilePlayer,kAudioUnitProperty_ScheduleStartTimeStamp,kAudioUnitScope_Global,0,&startTime,sizeof(startTime)),"setUpAUFilePlayer AudioUnitSetProperty[kAudioUnitProperty_ScheduleStartTimeStamp]");

return noErr;  
}

// Audio file player setup: here I set up the background music file

-(OSStatus) setUpAUFilePlayer2{
NSString *songPath = [[NSBundle mainBundle] pathForResource: @"BGmusic" ofType:@".mp3"];
CFURLRef songURL = ( CFURLRef) [NSURL fileURLWithPath:songPath];

// open the input audio file
Checkerror(AudioFileOpenURL(songURL,kAudioFileReadPermission,0,&inputFile2),"setUpAUFilePlayer2 AudioFileOpenURL Failed");

AudioStreamBasicDescription fileASBD;
// get the audio data format from the file
UInt32 propSize = sizeof(fileASBD);
Checkerror(AudioFileGetProperty(inputFile2,kAudioFilePropertyDataFormat,&propSize,&fileASBD),"setUpAUFilePlayer2 Couldn't get file's data format");

// tell the file player unit to load the file we want to play
Checkerror(AudioUnitSetProperty(mFilePlayer2,kAudioUnitProperty_ScheduledFileIDs,kAudioUnitScope_Global,0,&inputFile2,sizeof(inputFile2)),"setUpAUFilePlayer2 AudioUnitSetProperty[kAudioUnitProperty_ScheduledFileIDs] Failed");

UInt64 nPackets;
UInt32 propsize = sizeof(nPackets);
Checkerror(AudioFileGetProperty(inputFile2,kAudioFilePropertyAudioDataPacketCount,&propsize,&nPackets),"setUpAUFilePlayer2 AudioFileGetProperty[kAudioFilePropertyAudioDataPacketCount] Failed");

// tell the file player AU to play the entire file
ScheduledAudioFileRegion rgn;
memset (&rgn.mTimeStamp,0,sizeof(rgn.mTimeStamp));
rgn.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
rgn.mTimeStamp.mSampleTime = 0;
rgn.mCompletionProc = NULL;
rgn.mCompletionProcUserData = NULL;
rgn.mAudioFile = inputFile2;
rgn.mLoopCount = -1;
rgn.mStartFrame = 0;
rgn.mFramesToPlay = nPackets * fileASBD.mFramesPerPacket;


if (MaxSampleTime < rgn.mFramesToPlay)
{
    MaxSampleTime = rgn.mFramesToPlay;
}

Checkerror(AudioUnitSetProperty(mFilePlayer2,kAudioUnitProperty_ScheduledFileRegion,kAudioUnitScope_Global,0,&rgn,sizeof(rgn)),"setUpAUFilePlayer2 AudioUnitSetProperty[kAudioUnitProperty_ScheduledFileRegion] Failed");

// prime the file player AU with default values
UInt32 defaultVal = 0;
Checkerror(AudioUnitSetProperty(mFilePlayer2,kAudioUnitProperty_ScheduledFilePrime,kAudioUnitScope_Global,0,&defaultVal,sizeof(defaultVal)),"setUpAUFilePlayer2 AudioUnitSetProperty[kAudioUnitProperty_ScheduledFilePrime] Failed");


// tell the file player AU when to start playing (-1 sample time means next render cycle)
AudioTimeStamp startTime;
memset (&startTime,0,sizeof(startTime));
startTime.mFlags = kAudioTimeStampSampleTimeValid;
startTime.mSampleTime = -1;
Checkerror(AudioUnitSetProperty(mFilePlayer2,kAudioUnitProperty_ScheduleStartTimeStamp,kAudioUnitScope_Global,0,&startTime,sizeof(startTime)),"setUpAUFilePlayer2 AudioUnitSetProperty[kAudioUnitProperty_ScheduleStartTimeStamp]");

return noErr;  
}

// Start saving the file

- (void)startRecordingAAC{
AudioStreamBasicDescription destinationFormat;
memset(&destinationFormat,0,sizeof(destinationFormat));
destinationFormat.mChannelsPerFrame = 2;
destinationFormat.mFormatID = kAudioFormatMPEG4AAC;
UInt32 size = sizeof(destinationFormat);
OSStatus result = AudioFormatGetProperty(kAudioFormatProperty_FormatInfo,0,NULL,&size,&destinationFormat);
if(result) printf("AudioFormatGetProperty %ld \n",(long)result);
NSArray  *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,NSUserDomainMask,YES);
NSString *documentsDirectory = [paths objectAtIndex:0];



NSString *destinationFilePath = [[NSString alloc] initWithFormat: @"%@/output.m4a",documentsDirectory];
CFURLRef destinationURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,(CFStringRef)destinationFilePath,kCFURLPOSIXPathStyle,false);
[destinationFilePath release];

// specify codec Saving the output in .m4a format
result = ExtAudioFileCreateWithURL(destinationURL,kAudioFileM4AType,&destinationFormat,NULL,kAudioFileFlags_EraseFile,&extAudioFile);
if(result) printf("ExtAudioFileCreateWithURL %ld \n",(long)result);
CFRelease(destinationURL);

// This is a very important part and easiest way to set the ASBD for the File with correct format.
AudioStreamBasicDescription clientFormat;
UInt32 fSize = sizeof (clientFormat);
memset(&clientFormat,0,sizeof(clientFormat));
// get the audio data format from the output unit
Checkerror(AudioUnitGetProperty(mGIO,kAudioUnitProperty_StreamFormat,kAudioUnitScope_Output,0,&clientFormat,&fSize),"AudioUnitGetProperty on mGIO Failed");

// set the client data format on the output file
Checkerror(ExtAudioFileSetProperty(extAudioFile,kExtAudioFileProperty_ClientDataFormat,sizeof(clientFormat),&clientFormat),"ExtAudioFileSetProperty kExtAudioFileProperty_ClientDataFormat Failed");
// specify codec
UInt32 codec = kAppleHardwareAudioCodecManufacturer;
Checkerror(ExtAudioFileSetProperty(extAudioFile,kExtAudioFileProperty_CodecManufacturer,sizeof(codec),&codec),"ExtAudioFileSetProperty on extAudioFile Failed");

Checkerror(ExtAudioFileWriteAsync(extAudioFile,0,NULL),"ExtAudioFileWriteAsync Failed");

[self pullGenericOutput];
}

// Manually pull and fetch the data/buffers from the GenericOutput node.

-(void)pullGenericOutput{
AudioUnitRenderActionFlags flags = 0;
AudioTimeStamp inTimeStamp;
memset(&inTimeStamp,0,sizeof(AudioTimeStamp));
inTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
UInt32 busNumber = 0;
UInt32 numberFrames = 512;
inTimeStamp.mSampleTime = 0;
int channelCount = 2;

NSLog(@"Final numberFrames :%u",(unsigned int)numberFrames);
int totFrms = (int)MaxSampleTime;
while (totFrms > 0)
{
    if (totFrms < numberFrames)
    {
        numberFrames = totFrms;
        NSLog(@"Final numberFrames :%u",(unsigned int)numberFrames);
        totFrms = 0;   // last (partial) slice
    }
    else
    {
        totFrms -= numberFrames;
    }
    AudioBufferList *bufferList = (AudioBufferList*)malloc(sizeof(AudioBufferList)+sizeof(AudioBuffer)*(channelCount-1));
    bufferList->mNumberBuffers = channelCount;
    for (int j=0; j<channelCount; j++)
    {
        AudioBuffer buffer = {0};
        buffer.mNumberChannels = 1;
        buffer.mDataByteSize = numberFrames*sizeof(AudioUnitSampleType);
        buffer.mData = calloc(numberFrames,sizeof(AudioUnitSampleType));

        bufferList->mBuffers[j] = buffer;

    }
    Checkerror(AudioUnitRender(mGIO,&flags,&inTimeStamp,busNumber,numberFrames,bufferList),"AudioUnitRender mGIO");

    Checkerror(ExtAudioFileWrite(extAudioFile,numberFrames,bufferList),"extaudiofilewrite fail");

    // advance the render timestamp by the frames just pulled and free this slice's buffers
    inTimeStamp.mSampleTime += numberFrames;
    for (int j=0; j<channelCount; j++)
    {
        free(bufferList->mBuffers[j].mData);
    }
    free(bufferList);
}

[self FilesSavingCompleted];
}

// FilesSavingCompleted

-(void)FilesSavingCompleted{
OSStatus status = ExtAudioFileDispose(extAudioFile);
printf("OSStatus(ExtAudioFileDispose): %ld\n",(long)status);
}
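
To tie the pieces together, a hypothetical caller (the class name below is my assumption, not from the post; it stands for whatever class holds the methods above) would build the graph and then kick off the offline render and save:

// Hypothetical usage (assumes the code above lives in a class called OfflineRenderer):
OfflineRenderer *renderer = [[OfflineRenderer alloc] init]; // builds the AUGraph and schedules both files
[renderer startRecordingAAC];                               // pulls the GenericOutput and writes Documents/output.m4a
// FilesSavingCompleted runs once the whole mix has been rendered and written.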
