Today I've finally made some more progress. Following Implementation of the chat interface from 0 to 1 (1), this time we will talk about the bottom bar of the chat page.

Implementation of chat interface from 0 to 1 (2)

The demo address is JPChatBottomBar

Foreword

JPChatBottomBar is similar to the bottom input bar of today's mainstream chat pages, WeChat for example:

The reason I chose to start with this bar, my personal thinking: functionally, this module can be made independent of the IM layer, shielding the rest of the chat framework from differences caused by whichever third-party communication service is chosen. If the framework changes later, the impact on this module will be minimal.

Although JPChatBottomBar is not the core of the entire framework, it provides the basic message-editing service. While imitating the bar's implementation I also ran into some problems.

For reasons of length, this article mainly describes the more complex implementations and some detail problems; simple logical judgments will not appear here.

The demo is here: JPChatBottomBar – GitHub address

Functional analysis

Combined with the previous figure, we can preliminarily summarize the functions that JPChatBottomBar should implement as follows:

  • 1. Keyboard switching;
  • 2. Recording a voice message;
  • 3. Operations on text messages (edit, delete, send);
  • 4. Embedding emojis in text messages;
  • 5. Tapping a ‘big’ emoji (similar to a GIF sticker);
  • 6. Tapping the ‘more’ button to select other functions (like WeChat: gallery, camera, location, etc.).

Here, user actions (text messages, voice messages, and so on) are passed out through a delegate, JPChatBottomBarDelegate, which provides these services to the other modules of the chat framework.
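Pieced together from the methods that appear later in this article, the delegate looks roughly like this (a sketch; the demo's full protocol may declare more methods):

@protocol JPChatBottomBarDelegate <NSObject>
@optional
// Voice message data, sent after recording ends
- (void)msgEditAgentAudio:(NSData *)audioData;
// Large emoji (GIF) data, sent when a LargeEmoji is tapped
- (void)msgEditAgentSendBigEmoji:(NSData *)bigEmojiData;
// A tap on one of the 'more' keyboard items
- (void)msgEditAgentClickMoreIVItem:(NSDictionary *)dict;
@end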

To illustrate the classes I use:

  • 1. JPChatBottomBar: the entire bottom bar
  • 2. preview: the classes in this folder implement the emoji preview
  • 3. imageResource: this folder contains the image resources used in the demo
  • 4. JPEmojiManager: reads all emoji resources
  • 5. JPPlayerHelper: recording and playback
  • 6. JPAttributedStringHelper: converts between emoji description text and emoji images
  • 7. model: JPEmojiModel binds a single emoji, JPEmojiGroupModel binds a whole set of emojis
  • 8. category: some commonly used utility categories

Next, I will go through the functions listed above one by one, explaining how each is implemented and the problems I ran into along the way.

Keyboard switching

The effect can be viewed on my blog or by downloading the demo.

As you can see, controller.view slides up as the keyboard pops up to avoid blocking the user’s chat page. This is also a very basic feature.

But here’s the detail:

My first approach was KVO on textView.inputView: as you can see from the code in the demo, once the chatBottomBar reaches its target position, call -[UITextView reloadInputViews] to wake the keyboard, so that it pops up from below without briefly covering anything.
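A minimal sketch of that first approach (the constants are illustrative, not the demo's exact values):

// Move the bar up first, then wake the keyboard so it slides up with no overlap.
CGFloat keyboardHeight = 258; // assumed fixed height -- the weak point of this approach
[UIView animateWithDuration:0.25 animations:^{
    self.superview.y -= keyboardHeight;
} completion:^(BOOL finished) {
    [self.textView reloadInputViews]; // re-query inputView and bring the keyboard up
}];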

However, it turns out system keyboard heights are not all the same: the Chinese pinyin nine-grid keyboard is taller than the 26-key layout, while the Japanese nine-grid keyboard is shorter than the 26-key layout, so this approach was not complete.

So in the end I listened for the keyboard notifications instead:

WeChat's implementation of this part has no such covering effect: WeChat waits for viewController.view to reach position, then lets the keyboard pop up from below.

Listen for notifications on the keyboard:

// JPChatBottomBar.m
// Listen for the keyboard's old and new frames
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(keyboardWillChangeRect:)
                                             name:UIKeyboardWillChangeFrameNotification
                                           object:nil];

- (void)keyboardWillChangeRect:(NSNotification *)noti {
    NSValue * aValue = noti.userInfo[UIKeyboardFrameBeginUserInfoKey];
    self.oldRect = [aValue CGRectValue];
    NSValue * newValue = noti.userInfo[UIKeyboardFrameEndUserInfoKey];
    self.newRect = [newValue CGRectValue];
    [UIView animateWithDuration:0.3 animations:^{
        if (self.superview.y == 0) {
            self.superview.y -= self.newRect.size.height;
        } else {
            self.superview.y -= (self.newRect.size.height - self.oldRect.size.height);
        }
    } completion:^(BOOL finished) {
    }];
}
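One possible refinement (my suggestion, not in the demo): drive the animation with the duration and curve the system supplies in the notification's userInfo, so the bar tracks the keyboard exactly:

// Inside keyboardWillChangeRect:, read the system animation parameters
NSTimeInterval duration = [noti.userInfo[UIKeyboardAnimationDurationUserInfoKey] doubleValue];
UIViewAnimationCurve curve = [noti.userInfo[UIKeyboardAnimationCurveUserInfoKey] integerValue];
[UIView animateWithDuration:duration
                      delay:0
                    options:(UIViewAnimationOptions)(curve << 16) // convert curve to animation options
                 animations:^{
                     // move self.superview as above
                 } completion:nil];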

If readers passing by know a better way to switch keyboards without this covering effect, please suggest it; I am still an iOS novice 😂. Thank you 🙏🙏🙏

All that remains for keyboard switching is changing the keyboard's state (swapping the corresponding view) according to the user's taps. That's all for this part ☺️👌🏾.

Voice message

After referring to someone else's demo (an iOS imitation of WeChat's recording control), I implemented one too.

First, the classes I use:

  • 1. JPPlayerHelper: implements recording and playback.
  • 2. JPAudioView: displays the recording status.

Let me outline the implementation. First of all, the two classes do not depend on each other; they are coupled only inside JPChatBottomBar.

Slide up to cancel, slide down to continue recording

I use the following three methods in JPAudioView so the audioView can track the user's gesture (touch down, slide up/down, finger lifted) and react accordingly:

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event;
- (void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event ;
- (void)touchesEnded:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event;
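For example, touchesMoved: might distinguish the up/down zones like this (a sketch with an assumed threshold; JPPressingStateDown is assumed by symmetry with JPPressingStateUp, which appears later):

- (void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    CGPoint point = [touches.anyObject locationInView:self];
    if (point.y < -20.0) {                      // finger well above the view: cancel zone
        self.state = JPPressingStateUp;
        if (self.pressingUp) self.pressingUp();
    } else {                                    // finger back inside: keep recording
        self.state = JPPressingStateDown;
        if (self.pressingDown) self.pressingDown();
    }
}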

Each of these three methods handles the view's own UI; externally, blocks are used to start recording and to update the UI according to voice intensity:

// JPChatBottomBar.m
- (JPAudioView *)audioView {
    if (!_audioView) {
        JPAudioView * tmpView = [[JPAudioView alloc] initWithFrame:CGRectMake(self.textView.x, self.textView.y, self.textView.width, _btnWH)];
        [self addSubview:tmpView];
        _audioView = tmpView;
        // Wire up the audioView's gesture blocks
        __weak typeof(self) wSelf = self;
        _audioView.pressBegin = ^{
            [wSelf.audioView setAudioingImage:[UIImage imageNamed:@"zhengzaiyuyin_1"] text:@"Release to send, slide up to cancel."];
            // Start recording
            [wSelf.recoder jp_recorderStart];
        };
        _audioView.pressingUp = ^{
            [wSelf.audioView setAudioingImage:[UIImage imageNamed:@"songkai"] text:@"Release your finger to cancel sending."];
        };
        _audioView.pressingDown = ^{
            NSString * imgStr = [NSString stringWithFormat:@"zhengzaiyuyin_%d", imageIndex];
            [wSelf.audioView setAudioingImage:[UIImage imageNamed:imgStr] text:@"Release to send, slide up to cancel."];
        };
        _audioView.pressEnd = ^{
            [wSelf.audioView setAudioViewHidden];
            [wSelf.recoder jp_recorderStop];
            NSString * filePath = [wSelf.recoder getfilePath];
            NSData * audioData = [NSData dataWithContentsOfFile:filePath];
            // Pass the voice message data out through the delegate
            if (wSelf.agent && [wSelf.agent respondsToSelector:@selector(msgEditAgentAudio:)]) {
                [wSelf.agent msgEditAgentAudio:audioData];
            }
            if (wSelf.msgEditAgentAudioBlock) {
                wSelf.msgEditAgentAudioBlock(audioData);
            }
        };
    }
    return _audioView;
}

Next comes the key point: refreshing the audioView's UI according to voice intensity.

The effect can be viewed on the blog or by downloading the demo.

First, the AVAudioRecorder methods for measuring voice intensity:

// AVAudioRecorder (system API)
// Refresh the meter values; call this before reading them
- (void)updateMeters;
// Get the peak power for a channel
- (float)peakPowerForChannel:(NSUInteger)channelNumber;
// Get the average power for a channel
- (float)averagePowerForChannel:(NSUInteger)channelNumber;

Before reading the intensity, call updateMeters to refresh the measurements. From the average and peak values we can then compute the relative loudness with some algorithm. My algorithm here is crude; I designed a simple one:

// JPPlayerHelper.m
- (CGFloat)audioPower {
    [self.recorder updateMeters]; // Refresh the meter values
    float power = [self.recorder averagePowerForChannel:0]; // Average power of channel 0; the range is [-160, 0], 0 is loudest
//    float powerMax = [self.recorder peakPowerForChannel:0];
//    CGFloat progress = (1.0/160.0) * (power + 160);
    power = power + 160 - 50;
    int dB = 0;
    if (power < 0.f) {
        dB = 0;
    } else if (power < 40.f) {
        dB = (int)(power * 0.875);
    } else if (power < 100.f) {
        dB = (int)(power - 15);
    } else if (power < 110.f) {
        dB = (int)(power * 2.5 - 165);
    } else {
        dB = 110;
    }
    return dB;
}
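For comparison, the simplest mapping is the linear normalization that appears commented out above; on its own it would look like this (my sketch, not the demo's code):

- (CGFloat)normalizedPower {
    [self.recorder updateMeters];
    float power = [self.recorder averagePowerForChannel:0]; // range [-160, 0]
    return (power + 160.0) / 160.0;                          // 0.0 = silent, 1.0 = loudest
}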

If readers have a better method for this algorithm, please share it; I am a novice thirsty for knowledge.

Given the decibel value obtained above, we can do some processing externally: for example, when the new measurement is higher than the old one, refresh the UI to raise the level; when it is lower, refresh it to lower the level. See the following code:

// JPChatbottomBar.m
- (void) jpHelperRecorderStuffWhenRecordWithAudioPower:(CGFloat)power{
    NSLog(@"%f",power);
    NSString * newPowerStr =[NSString stringWithFormat:@"%f",[self.helper audioPower]];
    if([newPowerStr floatValue] > [self.audioPowerStr floatValue]) {
        if(imageIndex == 6){
            return;
        }
        imageIndex ++;
    }else {
        if(imageIndex == 1){
            return;
        }
        imageIndex --;
    }
    if(self.audioView.state == JPPressingStateUp) {
        self.audioView.pressingDown();
    }
    self.audioPowerStr = newPowerStr;
}

Second, I added a timer in JPPlayerHelper that repeatedly calls the delegate method above (-jpHelperRecorderStuffWhenRecordWithAudioPower:) so the UI keeps refreshing; without the timer there is no event to trigger the audioView's UI refresh. The timer-related methods are as follows:

// JPPlayerHelper.m
- (NSTimer *)timer {
    if (!_timer) {
        _timer = [NSTimer scheduledTimerWithTimeInterval:0.35
                                                  target:self
                                                selector:@selector(doOutsideStuff)
                                                userInfo:nil
                                                 repeats:YES];
    }
    return _timer;
}

- (void)doOutsideStuff {
    if (self.delegate && [self.delegate respondsToSelector:@selector(jpHelperRecorderStuffWhenRecordWithAudioPower:)]) {
        [self.delegate jpHelperRecorderStuffWhenRecordWithAudioPower:[self audioPower]];
    }
}
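Note that a repeating NSTimer retains its target, so it should be invalidated when recording stops. A sketch of what jp_recorderStop might do (the demo's actual body may differ):

// JPPlayerHelper.m (sketch)
- (void)jp_recorderStop {
    [self.recorder stop];
    [_timer invalidate]; // break the retain cycle and stop the callbacks
    _timer = nil;
}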

Finally, after recording ends, the voice data is provided externally through the JPChatBottomBarDelegate method.

The algorithm for measuring voice intensity is not optimal; mine is clumsy and has shortcomings (it is insensitive to changes in the user's voice intensity). If readers passing by have good suggestions, please add them and I will adopt them, thank you 🙏🙏🙏.

‘More’ items on the keyboard

The ‘more’ keyboard in JPChatBottomBar is similar to WeChat's.

If developers want different items, they only need to add the corresponding entries to JPMorePackageList.plist inside the /ImageResource/JPMoreBundle bundle.

Internal layout adaptation is done, but when the number of items exceeds 8, the WeChat-style paging effect is not yet finished. I will keep improving it.

When the user taps one of these items, the event is passed outside via JPChatBottomBarDelegate, and the developer can handle it at the outermost layer, responding according to which item was tapped, similar to the following code:

// ViewController.m
NSString * kJPDictKeyImageStrKey = @"imageStr";
- (void)msgEditAgentClickMoreIVItem:(NSDictionary *)dict {
    NSString * judgeStr = dict[kJPDictKeyImageStrKey];
    if ([judgeStr isEqualToString:@"photo"]) {
        NSLog(@"Tapped the photo gallery");
    } else if ([judgeStr isEqualToString:@"camera"]) {
        NSLog(@"Tapped the camera");
    } else if ([judgeStr isEqualToString:@"file"]) {
        NSLog(@"Tapped file");
    } else if ([judgeStr isEqualToString:@"location"]) {
        NSLog(@"Tapped the location");
    }
}

At first I did not want to expose which item the user tapped, but then I realized developers face all kinds of business needs. For better extensibility and a simpler JPChatBottomBar structure, I exposed this part through the delegate.

Text message editing (sending, deleting, embedding emoji text)

I spent a lot of time on this part. Previously I had no real way to embed emoji images; emoji editing just relied on the system's native emoji.

This part is mainly about what to do when the user taps an emoji. Let's start from the problem and simplify things a bit.

There are two kinds of emojis

Looking at WeChat, there are two types of emojis: one can be embedded in the text box, the other is sent straight to the chat partner when tapped. Call these SmallEmoji (the former) and LargeEmoji (the latter).

The latter is straightforward: the tap is exposed layer by layer through the delegate.

// JPChatBottomBar.h
/**
 *  The user tapped a large emoji on the keyboard
 *  @param bigEmojiData: the emoji data
 */
- (void)msgEditAgentSendBigEmoji:(NSData *)bigEmojiData;
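Inside the emoji keyboard, the tap handler might forward the data like this (a sketch; the emojiArr lookup and the filePath property are my assumptions):

- (void)collectionView:(UICollectionView *)collectionView didSelectItemAtIndexPath:(NSIndexPath *)indexPath {
    JPEmojiModel * emoji = self.emojiArr[indexPath.item]; // assumed data source
    NSData * gifData = [NSData dataWithContentsOfFile:emoji.filePath]; // filePath: assumed property
    if (self.agent && [self.agent respondsToSelector:@selector(msgEditAgentSendBigEmoji:)]) {
        [self.agent msgEditAgentSendBigEmoji:gifData];
    }
}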

I’ll talk about “SmallEmoji embedded text” later.

The loading of emojis

Emoji packs are loaded by JPEmojiManager from the /ImageResource/JPEmojiBundle bundle. Each emoji pack has a corresponding entry in JPEmojiPackageList.plist, so if we add a new emoji pack later, we only need to save the images and add a new entry to the plist. Letting users dynamically add emoji packs in code works the same way.
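Reading the pack list could be sketched like this (the paths follow the bundle layout described above; the method name and the plist root being an array are my assumptions):

// JPEmojiManager.m (sketch)
- (NSArray *)loadEmojiGroupItems {
    NSString * bundlePath = [[NSBundle mainBundle] pathForResource:@"JPEmojiBundle" ofType:@"bundle"];
    NSString * plistPath = [bundlePath stringByAppendingPathComponent:@"JPEmojiPackageList.plist"];
    return [NSArray arrayWithContentsOfFile:plistPath]; // one entry per emoji pack
}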

To avoid reading files repeatedly, I write JPEmojiManager as a singleton.
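Presumably the standard dispatch_once form; a minimal sketch (the sharedManager name is an assumption):

// JPEmojiManager.m (sketch)
+ (instancetype)sharedManager {
    static JPEmojiManager * manager = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        manager = [[JPEmojiManager alloc] init];
    });
    return manager;
}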

// JPEmojiManager.h

/**
 *  @return all emoji groups
 */
- (NSArray <JPEmojiGroupModel *> *)getEmogiGroupArr;

/**
 *  @param group: which emoji group was selected
 *  @param page:  page number
 *  @return the emojis on that page
 */
- (NSArray <JPEmojiModel *> *)getEmojiArrGroup:(NSInteger)group page:(NSInteger)page;

You can see two classes here

  • JPEmojiModel: binds a single emoji
  • JPEmojiGroupModel: binds a whole set of emojis

These two methods in JPEmojiManager mainly serve the emoji paging effect (covered below).

The pagination effect of emojis

After looking at other emoji demos on GitHub, some did not implement sliding to switch emoji groups, so I implemented one myself; the effect can be viewed in the blog or the demo.

The effect is achieved by reusing three views, displayed in turn inside a scrollView.

// JPEmojiInputView.m
#pragma mark - three reusable views
@property (strong, nonatomic) JPInputPageView * leftPV;
@property (strong, nonatomic) JPInputPageView * currentPV;
@property (strong, nonatomic) JPInputPageView * rightPV;

After fetching the emojis of the corresponding page from JPEmojiManager, the method below passes each page's emojis into each page view:

/**
 *  @param emojiArr: one page of emojis (the data for the built-in collectionView)
 */
- (void)setEmojiArr:(NSArray <JPEmojiModel *> *)emojiArr isShowLargeImage:(BOOL)value;

Here is how three pages display all the emojis:

  • 1. First expand the scrollView's contentSize to accommodate the total number of emoji pages.
  • 2. Except for the first page of the first emoji group and the last page of the last group (where there is nothing to do), the page shown to the user is always self.currentPV.
  • 3. When you swipe left to show the next page of emojis, leftPV moves to the far right, loads that page's emojis, and gets ready for display. Once that is done, it is just a matter of pouring water between cups: reassign the three reusable views:
// JPEmojiInputView.m
JPInputPageView * tmpView;
tmpView = self.leftPV;
self.leftPV = self.currentPV;
self.currentPV = self.rightPV;
self.rightPV = tmpView;

I’ll show you the underlying implementation in the following image, which might make it easier to understand:

When the user swipes right to show the previous page, the underlying handling is similar, and so on. For details see -scrollViewDidScroll: in JPEmojiInputView.m.
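To make the mechanism concrete, here is a rough sketch of the swipe-left branch (the page math, the currentPage bookkeeping, and the emojiArrForPage: helper are my assumptions, not the demo's exact code):

// JPEmojiInputView.m (sketch)
- (void)scrollViewDidScroll:(UIScrollView *)scrollView {
    CGFloat pageW = scrollView.frame.size.width;
    NSInteger page = (NSInteger)(scrollView.contentOffset.x / pageW + 0.5);
    if (page > self.currentPage) { // swiped left: next page
        // park leftPV one slot to the right of rightPV and fill it with that page's emojis
        self.leftPV.frame = CGRectMake((page + 1) * pageW, 0, pageW, self.leftPV.frame.size.height);
        [self.leftPV setEmojiArr:[self emojiArrForPage:page + 1] isShowLargeImage:NO]; // hypothetical helper
        // rotate the three reusable views
        JPInputPageView * tmpView = self.leftPV;
        self.leftPV = self.currentPV;
        self.currentPV = self.rightPV;
        self.rightPV = tmpView;
        self.currentPage = page; // hypothetical bookkeeping property
    }
    // the swipe-right branch is symmetric
}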

Click SmallEmoji to embed the text

TextView and textField in iOS can automatically recognize the system's native emoji:

But the small emoji images added by us developers cannot be recognized directly by the textView or textField.

Here we can refer to the implementation of several mainstream apps.

  • 1. Weibo: tapping an emoji embeds the emoji image;
  • 2. WeChat: tapping a non-native emoji embeds the emoji's description text.

Note that after a mixed image/text message is sent to our server, the server generally does not parse image information out of the string; what reaches the server is still plain text, and the image attachments are converted back into their description text internally. The picture below explains this:

As you can see, we need to convert the attributed text back into plain text locally before sending it to the server. Matching emojis with their description text mainly relies on two system classes:

  • 1. NSTextAttachment: the ‘plug-in’ within the text; we use this class to insert the image.
  • 2. NSRegularExpression: uses a regular expression to match the emoji description text in the string.

Here is a comprehensive reference for regular expression syntax: regular expressions.

Here my regex matches the following pattern:

NSRegularExpression *regex = [NSRegularExpression regularExpressionWithPattern:@"\\[.+?\\]" options:0 error:NULL];

After matching each emoji description text, an array stores the matching results (description text, image resource, and the description text's position in the original string). We then traverse the array and replace each description text with an image attachment (NSTextAttachment). Note that with each replacement, the ranges of the emoji text not yet replaced will shift, so we must subtract an offset from their original range.location. The implementation is as follows:

// JPAttributedStringHelper.m
- (NSAttributedString *)getTextViewArrtibuteFromStr:(NSString *)str {
    if (str.length == 0) {
        return nil;
    }
    NSMutableAttributedString * attStr = [[NSMutableAttributedString alloc] initWithString:str
                                                                                attributes:[JPAttributedStringConfig getAttDict]];

    NSMutableParagraphStyle * paraStyle = [[NSMutableParagraphStyle alloc] init];
    paraStyle.lineSpacing = 5;
    [attStr addAttribute:NSParagraphStyleAttributeName value:paraStyle range:NSMakeRange(0, attStr.length)];

    NSArray<JPEmojiMatchingResult *> * emojiStrArr = [self analysisStrWithStr:str];
    if (emojiStrArr && emojiStrArr.count != 0) {
        NSInteger offset = 0; // The offset of the emoji text
        for (JPEmojiMatchingResult * result in emojiStrArr) {
            if (result.emojiImage) { // The description text matched an emoji image
                NSMutableAttributedString * emojiAttStr = [[NSMutableAttributedString alloc] initWithAttributedString:[NSAttributedString attributedStringWithAttachment:result.textAttachment]];
                if (!emojiAttStr) {
                    continue;
                }
                NSRange actualRange = NSMakeRange(result.range.location - offset, result.range.length);
                [attStr replaceCharactersInRange:actualRange withAttributedString:emojiAttStr];
                // Each replacement shrinks the string, so grow the offset
                offset += (result.range.length - 1);
            }
        }
        return attStr;
    } else {
        return [[NSAttributedString alloc] initWithString:str attributes:[JPAttributedStringConfig getAttDict]];
    }
}

The result can be viewed on the blog or by downloading the demo.

To delete an entire emoji description text when the delete key is pressed, we need to determine whether the text before textView.selectedRange ends with an emoji description. The code is as follows:

// JPChatBottomBar.m
- (void)clickDeleteBtnInputView:(JPEmojiInputView *)inputView {
    NSString * souceText = [self.textView.text substringToIndex:self.textView.selectedRange.location];
    if (souceText.length == 0) {
        return;
    }
    NSRange range = self.textView.selectedRange;
    if (range.location == NSNotFound) {
        range.location = self.textView.text.length;
    }
    if (range.length > 0) {
        [self.textView deleteBackward];
        return;
    } else {
        // The regular expression matches the range of text to be replaced
        if ([souceText hasSuffix:@"]"]) {
            // The text before the cursor may end with an emoji description
            if ([[souceText substringWithRange:NSMakeRange(souceText.length - 2, 1)] isEqualToString:@"]"]) {
                // It is just a lone "]" character
                [self.textView deleteBackward];
                return;
            }
            // Regular expression
            NSString * pattern = @"\\[[a-zA-Z0-9\\u4e00-\\u9fa5]+\\]";
            NSError * error = nil;
            NSRegularExpression * re = [NSRegularExpression regularExpressionWithPattern:pattern options:NSRegularExpressionCaseInsensitive error:&error];
            if (!re) {
                NSLog(@"%@", [error localizedDescription]);
            }
            NSArray * resultArray = [re matchesInString:souceText options:0 range:NSMakeRange(0, souceText.length)];
            if (resultArray.count != 0) {
                NSTextCheckingResult * checkingResult = resultArray.lastObject;
                NSString * resultStr = [souceText substringWithRange:NSMakeRange(0, souceText.length - checkingResult.range.length)];
                self.textView.text = [self.textView.text stringByReplacingCharactersInRange:NSMakeRange(0, souceText.length) withString:resultStr];
                self.textView.selectedRange = NSMakeRange(resultStr.length, 0);
            } else {
                [self.textView deleteBackward];
            }
        } else {
            // The last character is not part of an emoji description
            [self.textView deleteBackward];
        }
    }
    // Let the textView adapt its size
    [self textViewDidChange:self.textView];
}

Take a look at the demo 🤓.

By this point I have written nearly 5K words 😂

Emoji preview

The emoji preview effect is divided into:

  • 1. Small emoji preview
  • 2. Large emoji preview (GIF playback)

Display

The former's background view is a pre-drawn image;

for the latter's background view, I used the redraw mechanism (Quartz 2D drawing plus overriding -drawRect:) to stroke and fill the view.

// JPGIfPreview.m
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    // 1. Add the drawing path
    CGContextMoveToPoint(context, 0, _filletRadius);
    CGContextAddLineToPoint(context, 0, _squareHeight - _filletRadius);
    CGContextAddQuadCurveToPoint(context, 0, _squareHeight, _filletRadius, _squareHeight);
    CGContextAddLineToPoint(context, (_squareWidht - _triangleWdith) / 2, _squareHeight);
    CGContextAddLineToPoint(context, BaseWidth / 2, BaseHeight);
    CGContextAddLineToPoint(context, (_squareWidht + _triangleWdith) / 2, _squareHeight);
    CGContextAddLineToPoint(context, _squareWidht - _filletRadius, _squareHeight);
    CGContextAddQuadCurveToPoint(context, _squareWidht, _squareHeight, _squareWidht, _squareHeight - _filletRadius);
    CGContextAddLineToPoint(context, _squareWidht, _filletRadius);
    CGContextAddQuadCurveToPoint(context, _squareWidht, 0, _squareWidht - _filletRadius, 0);
    CGContextAddLineToPoint(context, _filletRadius, 0);
    CGContextAddQuadCurveToPoint(context, 0, 0, 0, _filletRadius);
    // 2. Prepare the colors
    CGFloat backColor[4] = {1, 1, 1, 0.86};
    CGFloat layerColor[4] = {0.9, 0.9, 0.9, 0};
    // 3. Set the stroke and fill colors
    CGContextSetFillColor(context, backColor);
    CGContextSetStrokeColor(context, layerColor);
    // 4. Draw
    CGContextDrawPath(context, kCGPathFillStroke);
}

Coordinate conversion

With the layout complete, the next step is to add our preview view to the interface.

Add a long-press gesture recognizer to the collectionView, monitor the gesture's state, work out which cell the gesture is over, and display a preview of its content.
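The gesture wiring might look like this (a sketch; the handler and helper names are mine, not the demo's):

// JPInputPageView.m (sketch)
UILongPressGestureRecognizer * longPress =
    [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(handleLongPress:)];
[self.collectionView addGestureRecognizer:longPress];

- (void)handleLongPress:(UILongPressGestureRecognizer *)gesture {
    CGPoint point = [gesture locationInView:self.collectionView];
    NSIndexPath * indexPath = [self.collectionView indexPathForItemAtPoint:point];
    if (!indexPath) { return; }
    UICollectionViewCell * cell = [self.collectionView cellForItemAtIndexPath:indexPath];
    switch (gesture.state) {
        case UIGestureRecognizerStateBegan:
        case UIGestureRecognizerStateChanged:
            [self showPreviewForCell:cell atIndexPath:indexPath]; // hypothetical helper
            break;
        default:
            [self hidePreview]; // hypothetical helper
            break;
    }
}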

Here I chose [UIApplication sharedApplication].windows.lastObject as the superview, i.e. the window of the emojiInputView.

One important thing to note: the window's CGPointZero is the top-left corner of the screen, so when converting coordinates (each cell is one emoji; I use a collectionView to show each page of emojis), I convert cell.frame to a frame in the window:

CGRect  rect = [[UIApplication sharedApplication].windows.lastObject convertRect:cell.frame fromView:self.collectionView];

Once the coordinates are converted, all that’s left is to add them.
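Placing the preview could be sketched as follows (the preview property and its size constants are assumptions):

CGRect rect = [[UIApplication sharedApplication].windows.lastObject convertRect:cell.frame
                                                                        fromView:self.collectionView];
// center the preview horizontally over the cell and sit it just above the cell's top edge
CGFloat previewW = 80, previewH = 120; // assumed size
self.preview.frame = CGRectMake(CGRectGetMidX(rect) - previewW / 2,
                                CGRectGetMinY(rect) - previewH,
                                previewW, previewH);
[[UIApplication sharedApplication].windows.lastObject addSubview:self.preview];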

GIF playback

This was my first contact with GIF playback. A good article on it: iOS-GIF image display in N ways (native + third party), which covers both native and third-party implementations. To reduce the project's library dependencies, I used the native approach from that article; you can follow the link for the internals, so I won't describe it at length here (5.3K words already 😂😂😂).
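The core of the native approach can be sketched with ImageIO (my own condensed version, not the linked article's exact code): decode each frame, sum the per-frame delays, and play the result as an animated UIImage.

#import <ImageIO/ImageIO.h>
#import <UIKit/UIKit.h>

static UIImage * JPAnimatedImageWithGIFData(NSData * data) {
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)data, NULL);
    if (!source) { return nil; }
    size_t count = CGImageSourceGetCount(source);
    NSMutableArray<UIImage *> * frames = [NSMutableArray arrayWithCapacity:count];
    NSTimeInterval duration = 0;
    for (size_t i = 0; i < count; i++) {
        CGImageRef cgImage = CGImageSourceCreateImageAtIndex(source, i, NULL);
        if (!cgImage) { continue; }
        [frames addObject:[UIImage imageWithCGImage:cgImage]];
        CGImageRelease(cgImage);
        // the per-frame delay lives in the GIF properties dictionary
        NSDictionary * props = CFBridgingRelease(CGImageSourceCopyPropertiesAtIndex(source, i, NULL));
        NSDictionary * gifProps = props[(__bridge NSString *)kCGImagePropertyGIFDictionary];
        duration += [gifProps[(__bridge NSString *)kCGImagePropertyGIFDelayTime] doubleValue];
    }
    CFRelease(source);
    return [UIImage animatedImageWithImages:frames duration:duration];
}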

References

These articles were of some help to me, and hopefully to you too:

iOS-GIF image display in N ways (native + third party)

WWDC 2017 – The key to optimizing the input experience: a full introduction to keyboard tips

OC implementation of an iOS input box that pops up with the keyboard, WeChat-style

Final words

After the JPChatBottomBar is complete, the entire framework can access user-edited messages or other user actions through the JPChatBottomBarDelegate.

That's it for the JPChatBottomBar part. From the demo to finishing this article I ran into many problems 😂; for example, I tried two approaches to the keyboard covering issue, and switching emoji groups also took some time.

If readers have better approaches to the techniques in the article and demo, you are welcome to share them 🙏, thanks 🙏.

I hope this article helps you.

If it helped, please give me a Star ✨. Thank you very much!